00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 107 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3285 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.057 The recommended git tool is: git 00:00:00.057 using credential 00000000-0000-0000-0000-000000000002 00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.083 Fetching changes from the remote Git repository 00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.131 Using shallow fetch with depth 1 00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.131 > git --version # timeout=10 00:00:00.174 > git --version # 'git version 2.39.2' 00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.207 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.207 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.566 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.578 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.590 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:04.590 > git config core.sparsecheckout # timeout=10 00:00:04.601 > git read-tree -mu HEAD # timeout=10 00:00:04.617 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:04.639 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:04.639 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:04.764 [Pipeline] Start of Pipeline 00:00:04.782 [Pipeline] library 00:00:04.783 Loading library shm_lib@master 00:00:04.783 Library shm_lib@master is cached. Copying from home. 00:00:04.799 [Pipeline] node 00:00:04.815 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.817 [Pipeline] { 00:00:04.827 [Pipeline] catchError 00:00:04.829 [Pipeline] { 00:00:04.842 [Pipeline] wrap 00:00:04.850 [Pipeline] { 00:00:04.858 [Pipeline] stage 00:00:04.861 [Pipeline] { (Prologue) 00:00:04.883 [Pipeline] echo 00:00:04.885 Node: VM-host-WFP1 00:00:04.891 [Pipeline] cleanWs 00:00:04.903 [WS-CLEANUP] Deleting project workspace... 00:00:04.903 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.909 [WS-CLEANUP] done 00:00:05.112 [Pipeline] setCustomBuildProperty 00:00:05.190 [Pipeline] httpRequest 00:00:05.209 [Pipeline] echo 00:00:05.211 Sorcerer 10.211.164.101 is alive 00:00:05.217 [Pipeline] httpRequest 00:00:05.221 HttpMethod: GET 00:00:05.222 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.222 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.234 Response Code: HTTP/1.1 200 OK 00:00:05.235 Success: Status code 200 is in the accepted range: 200,404 00:00:05.235 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:10.140 [Pipeline] sh 00:00:10.423 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:10.439 [Pipeline] httpRequest 00:00:10.456 [Pipeline] echo 00:00:10.458 Sorcerer 10.211.164.101 is alive 00:00:10.468 [Pipeline] httpRequest 00:00:10.472 HttpMethod: GET 00:00:10.473 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:10.473 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:10.498 Response Code: HTTP/1.1 200 OK 00:00:10.498 Success: Status code 200 is in the accepted range: 200,404 00:00:10.499 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:37.250 [Pipeline] sh 00:01:37.530 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:40.184 [Pipeline] sh 00:01:40.458 + git -C spdk log --oneline -n5 00:01:40.458 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:40.458 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:40.458 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:40.458 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:40.458 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:40.477 [Pipeline] withCredentials 00:01:40.486 > git --version # timeout=10 00:01:40.497 > git --version # 'git version 2.39.2' 00:01:40.512 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:40.514 [Pipeline] { 00:01:40.523 [Pipeline] retry 00:01:40.524 [Pipeline] { 00:01:40.538 [Pipeline] sh 00:01:40.813 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:41.756 [Pipeline] } 00:01:41.777 [Pipeline] // retry 00:01:41.799 [Pipeline] } 00:01:41.813 [Pipeline] // withCredentials 00:01:41.819 [Pipeline] httpRequest 00:01:41.831 [Pipeline] echo 00:01:41.832 Sorcerer 10.211.164.101 is alive 00:01:41.838 [Pipeline] httpRequest 00:01:41.842 HttpMethod: GET 00:01:41.842 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:41.842 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:41.845 Response Code: HTTP/1.1 200 OK 00:01:41.845 Success: Status code 200 is in the accepted range: 200,404 00:01:41.845 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:44.271 [Pipeline] sh 00:01:44.547 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:45.930 [Pipeline] sh 00:01:46.207 + git -C dpdk log --oneline -n5 00:01:46.207 eeb0605f11 version: 23.11.0 00:01:46.207 238778122a doc: update release notes for 23.11 00:01:46.207 46aa6b3cfc doc: fix description of RSS features 00:01:46.207 
dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:46.207 7e421ae345 devtools: support skipping forbid rule check 00:01:46.223 [Pipeline] writeFile 00:01:46.237 [Pipeline] sh 00:01:46.514 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:46.527 [Pipeline] sh 00:01:46.808 + cat autorun-spdk.conf 00:01:46.808 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.808 SPDK_TEST_NVME=1 00:01:46.808 SPDK_TEST_FTL=1 00:01:46.808 SPDK_TEST_ISAL=1 00:01:46.808 SPDK_RUN_ASAN=1 00:01:46.808 SPDK_RUN_UBSAN=1 00:01:46.808 SPDK_TEST_XNVME=1 00:01:46.808 SPDK_TEST_NVME_FDP=1 00:01:46.808 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:46.808 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:46.808 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.815 RUN_NIGHTLY=1 00:01:46.817 [Pipeline] } 00:01:46.834 [Pipeline] // stage 00:01:46.852 [Pipeline] stage 00:01:46.855 [Pipeline] { (Run VM) 00:01:46.871 [Pipeline] sh 00:01:47.153 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:47.153 + echo 'Start stage prepare_nvme.sh' 00:01:47.153 Start stage prepare_nvme.sh 00:01:47.153 + [[ -n 6 ]] 00:01:47.153 + disk_prefix=ex6 00:01:47.153 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:47.153 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:47.153 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:47.153 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.153 ++ SPDK_TEST_NVME=1 00:01:47.153 ++ SPDK_TEST_FTL=1 00:01:47.153 ++ SPDK_TEST_ISAL=1 00:01:47.153 ++ SPDK_RUN_ASAN=1 00:01:47.153 ++ SPDK_RUN_UBSAN=1 00:01:47.153 ++ SPDK_TEST_XNVME=1 00:01:47.153 ++ SPDK_TEST_NVME_FDP=1 00:01:47.153 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.153 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:47.153 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.153 ++ RUN_NIGHTLY=1 00:01:47.153 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:47.153 + nvme_files=() 00:01:47.153 + declare -A nvme_files 00:01:47.153 + backend_dir=/var/lib/libvirt/images/backends 00:01:47.153 + nvme_files['nvme.img']=5G 00:01:47.153 + nvme_files['nvme-cmb.img']=5G 00:01:47.153 + nvme_files['nvme-multi0.img']=4G 00:01:47.153 + nvme_files['nvme-multi1.img']=4G 00:01:47.153 + nvme_files['nvme-multi2.img']=4G 00:01:47.153 + nvme_files['nvme-openstack.img']=8G 00:01:47.153 + nvme_files['nvme-zns.img']=5G 00:01:47.153 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:47.153 + (( SPDK_TEST_FTL == 1 )) 00:01:47.153 + nvme_files["nvme-ftl.img"]=6G 00:01:47.153 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:47.153 + nvme_files["nvme-fdp.img"]=1G 00:01:47.153 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:47.153 + for nvme in "${!nvme_files[@]}" 00:01:47.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:47.153 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.153 + for nvme in "${!nvme_files[@]}" 00:01:47.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:47.153 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:47.153 + for nvme in "${!nvme_files[@]}" 00:01:47.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:47.153 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.153 + for nvme in "${!nvme_files[@]}" 00:01:47.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:47.153 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:47.153 + for nvme in "${!nvme_files[@]}" 00:01:47.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:47.153 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.153 + for nvme in "${!nvme_files[@]}" 00:01:47.153 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:47.412 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.412 + for nvme in "${!nvme_files[@]}" 00:01:47.412 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:47.412 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.412 + for nvme in "${!nvme_files[@]}" 00:01:47.412 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:47.412 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:47.412 + for nvme in "${!nvme_files[@]}" 00:01:47.412 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:47.412 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.412 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:47.412 + echo 'End stage prepare_nvme.sh' 00:01:47.412 End stage prepare_nvme.sh 00:01:47.424 [Pipeline] sh 00:01:47.704 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:47.705 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:47.705 00:01:47.705 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:47.705 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:47.705 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:47.705 HELP=0 00:01:47.705 DRY_RUN=0 00:01:47.705 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:47.705 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:47.705 NVME_AUTO_CREATE=0 00:01:47.705 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:47.705 NVME_CMB=,,,, 00:01:47.705 NVME_PMR=,,,, 00:01:47.705 NVME_ZNS=,,,, 00:01:47.705 NVME_MS=true,,,, 00:01:47.705 NVME_FDP=,,,on, 00:01:47.705 SPDK_VAGRANT_DISTRO=fedora38 00:01:47.705 SPDK_VAGRANT_VMCPU=10 00:01:47.705 SPDK_VAGRANT_VMRAM=12288 00:01:47.705 SPDK_VAGRANT_PROVIDER=libvirt 00:01:47.705 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:47.705 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:47.705 SPDK_OPENSTACK_NETWORK=0 00:01:47.705 VAGRANT_PACKAGE_BOX=0 00:01:47.705 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:47.705 FORCE_DISTRO=true 00:01:47.705 VAGRANT_BOX_VERSION= 00:01:47.705 EXTRA_VAGRANTFILES= 00:01:47.705 NIC_MODEL=e1000 00:01:47.705 00:01:47.705 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:47.705 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:50.235 Bringing machine 'default' up with 'libvirt' provider... 00:01:51.173 ==> default: Creating image (snapshot of base box volume). 00:01:51.432 ==> default: Creating domain with the following settings... 
00:01:51.432 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721524295_de0ce0835bddb72c45f1 00:01:51.432 ==> default: -- Domain type: kvm 00:01:51.432 ==> default: -- Cpus: 10 00:01:51.432 ==> default: -- Feature: acpi 00:01:51.432 ==> default: -- Feature: apic 00:01:51.432 ==> default: -- Feature: pae 00:01:51.432 ==> default: -- Memory: 12288M 00:01:51.432 ==> default: -- Memory Backing: hugepages: 00:01:51.432 ==> default: -- Management MAC: 00:01:51.432 ==> default: -- Loader: 00:01:51.432 ==> default: -- Nvram: 00:01:51.432 ==> default: -- Base box: spdk/fedora38 00:01:51.432 ==> default: -- Storage pool: default 00:01:51.432 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721524295_de0ce0835bddb72c45f1.img (20G) 00:01:51.432 ==> default: -- Volume Cache: default 00:01:51.432 ==> default: -- Kernel: 00:01:51.432 ==> default: -- Initrd: 00:01:51.432 ==> default: -- Graphics Type: vnc 00:01:51.432 ==> default: -- Graphics Port: -1 00:01:51.432 ==> default: -- Graphics IP: 127.0.0.1 00:01:51.432 ==> default: -- Graphics Password: Not defined 00:01:51.432 ==> default: -- Video Type: cirrus 00:01:51.432 ==> default: -- Video VRAM: 9216 00:01:51.432 ==> default: -- Sound Type: 00:01:51.432 ==> default: -- Keymap: en-us 00:01:51.432 ==> default: -- TPM Path: 00:01:51.432 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:51.432 ==> default: -- Command line args: 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:51.432 ==> default: -> value=-drive, 00:01:51.432 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:51.432 ==> default: -> value=-drive, 00:01:51.432 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:51.432 ==> default: -> value=-drive, 00:01:51.432 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.432 ==> default: -> value=-drive, 00:01:51.432 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.432 ==> default: -> value=-drive, 00:01:51.432 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:51.432 ==> default: -> value=-drive, 00:01:51.432 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:51.432 ==> default: -> value=-device, 00:01:51.432 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.691 ==> default: Creating shared folders metadata... 00:01:51.691 ==> default: Starting domain. 00:01:53.596 ==> default: Waiting for domain to get an IP address... 00:02:08.483 ==> default: Waiting for SSH to become available... 00:02:10.429 ==> default: Configuring and enabling network interfaces... 00:02:15.742 default: SSH address: 192.168.121.114:22 00:02:15.742 default: SSH username: vagrant 00:02:15.742 default: SSH auth method: private key 00:02:19.032 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:27.151 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:33.715 ==> default: Mounting SSHFS shared folder... 00:02:36.248 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:36.248 ==> default: Checking Mount.. 00:02:37.627 ==> default: Folder Successfully Mounted! 00:02:37.627 ==> default: Running provisioner: file... 00:02:39.002 default: ~/.gitconfig => .gitconfig 00:02:39.568 00:02:39.568 SUCCESS! 00:02:39.568 00:02:39.568 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:39.568 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:39.568 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:39.568 00:02:39.576 [Pipeline] } 00:02:39.595 [Pipeline] // stage 00:02:39.604 [Pipeline] dir 00:02:39.605 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:02:39.606 [Pipeline] { 00:02:39.621 [Pipeline] catchError 00:02:39.623 [Pipeline] { 00:02:39.637 [Pipeline] sh 00:02:39.912 + vagrant ssh-config --host vagrant 00:02:39.912 + sed -ne /^Host/,$p 00:02:39.912 + tee ssh_conf 00:02:42.441 Host vagrant 00:02:42.441 HostName 192.168.121.114 00:02:42.441 User vagrant 00:02:42.441 Port 22 00:02:42.441 UserKnownHostsFile /dev/null 00:02:42.441 StrictHostKeyChecking no 00:02:42.441 PasswordAuthentication no 00:02:42.441 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:42.441 IdentitiesOnly yes 00:02:42.441 LogLevel FATAL 00:02:42.441 ForwardAgent yes 00:02:42.441 ForwardX11 yes 00:02:42.441 00:02:42.456 [Pipeline] withEnv 00:02:42.458 [Pipeline] { 00:02:42.479 [Pipeline] sh 00:02:42.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:42.764 source /etc/os-release 00:02:42.764 [[ -e /image.version ]] && img=$(< /image.version) 00:02:42.764 # Minimal, systemd-like check. 
00:02:42.764 if [[ -e /.dockerenv ]]; then 00:02:42.764 # Clear garbage from the node's name: 00:02:42.764 # agt-er_autotest_547-896 -> autotest_547-896 00:02:42.764 # $HOSTNAME is the actual container id 00:02:42.764 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:42.764 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:42.764 # We can assume this is a mount from a host where container is running, 00:02:42.764 # so fetch its hostname to easily identify the target swarm worker. 00:02:42.764 container="$(< /etc/hostname) ($agent)" 00:02:42.764 else 00:02:42.764 # Fallback 00:02:42.764 container=$agent 00:02:42.764 fi 00:02:42.764 fi 00:02:42.764 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:42.764 00:02:43.035 [Pipeline] } 00:02:43.048 [Pipeline] // withEnv 00:02:43.054 [Pipeline] setCustomBuildProperty 00:02:43.065 [Pipeline] stage 00:02:43.067 [Pipeline] { (Tests) 00:02:43.086 [Pipeline] sh 00:02:43.388 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:43.661 [Pipeline] sh 00:02:43.941 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:44.217 [Pipeline] timeout 00:02:44.217 Timeout set to expire in 40 min 00:02:44.219 [Pipeline] { 00:02:44.235 [Pipeline] sh 00:02:44.515 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:45.083 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:02:45.096 [Pipeline] sh 00:02:45.378 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:45.653 [Pipeline] sh 00:02:45.931 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:46.209 [Pipeline] sh 00:02:46.490 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:46.749 ++ readlink -f spdk_repo 00:02:46.749 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:46.749 + [[ -n /home/vagrant/spdk_repo ]] 00:02:46.749 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:46.749 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:46.749 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:46.749 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:46.749 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:46.749 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:46.749 + cd /home/vagrant/spdk_repo 00:02:46.749 + source /etc/os-release 00:02:46.749 ++ NAME='Fedora Linux' 00:02:46.749 ++ VERSION='38 (Cloud Edition)' 00:02:46.749 ++ ID=fedora 00:02:46.749 ++ VERSION_ID=38 00:02:46.749 ++ VERSION_CODENAME= 00:02:46.749 ++ PLATFORM_ID=platform:f38 00:02:46.749 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:46.749 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:46.749 ++ LOGO=fedora-logo-icon 00:02:46.749 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:46.749 ++ HOME_URL=https://fedoraproject.org/ 00:02:46.749 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:46.749 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:46.749 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:46.749 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:46.749 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:46.749 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:46.749 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:46.749 ++ SUPPORT_END=2024-05-14 00:02:46.749 ++ VARIANT='Cloud Edition' 00:02:46.749 ++ VARIANT_ID=cloud 00:02:46.749 + uname -a 00:02:46.749 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:46.749 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:47.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:47.574 Hugepages 00:02:47.574 node hugesize free / total 00:02:47.574 node0 1048576kB 0 / 0 00:02:47.574 node0 2048kB 0 / 0 00:02:47.574 00:02:47.574 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:47.574 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:47.574 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:47.574 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:47.574 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:47.574 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:47.574 + rm -f /tmp/spdk-ld-path 00:02:47.574 + source autorun-spdk.conf 00:02:47.574 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.574 ++ SPDK_TEST_NVME=1 00:02:47.574 ++ SPDK_TEST_FTL=1 00:02:47.574 ++ SPDK_TEST_ISAL=1 00:02:47.574 ++ SPDK_RUN_ASAN=1 00:02:47.574 ++ SPDK_RUN_UBSAN=1 00:02:47.574 ++ SPDK_TEST_XNVME=1 00:02:47.574 ++ SPDK_TEST_NVME_FDP=1 00:02:47.574 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:47.574 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:47.574 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:47.574 ++ RUN_NIGHTLY=1 00:02:47.574 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:47.574 + [[ -n '' ]] 00:02:47.574 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:47.574 + for M in /var/spdk/build-*-manifest.txt 00:02:47.574 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:47.574 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.574 + for M in /var/spdk/build-*-manifest.txt 00:02:47.574 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:47.574 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:47.574 ++ uname 00:02:47.574 + [[ Linux == \L\i\n\u\x ]] 00:02:47.574 + sudo dmesg -T 00:02:47.833 + sudo dmesg --clear 00:02:47.833 + dmesg_pid=5877 00:02:47.833 + [[ Fedora Linux == FreeBSD ]] 00:02:47.833 + sudo dmesg -Tw 
00:02:47.833 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.833 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.833 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:47.833 + [[ -x /usr/src/fio-static/fio ]] 00:02:47.833 + export FIO_BIN=/usr/src/fio-static/fio 00:02:47.833 + FIO_BIN=/usr/src/fio-static/fio 00:02:47.833 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:47.833 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:47.833 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:47.833 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.833 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.833 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:47.833 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.833 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.833 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:47.833 Test configuration: 00:02:47.833 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.833 SPDK_TEST_NVME=1 00:02:47.833 SPDK_TEST_FTL=1 00:02:47.833 SPDK_TEST_ISAL=1 00:02:47.833 SPDK_RUN_ASAN=1 00:02:47.833 SPDK_RUN_UBSAN=1 00:02:47.833 SPDK_TEST_XNVME=1 00:02:47.833 SPDK_TEST_NVME_FDP=1 00:02:47.833 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:47.833 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:47.833 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:47.833 RUN_NIGHTLY=1 01:12:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:47.833 01:12:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:47.833 01:12:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.833 01:12:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.833 01:12:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.833 01:12:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.833 01:12:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.833 01:12:33 -- paths/export.sh@5 -- $ export PATH 00:02:47.834 01:12:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.834 
01:12:33 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:47.834 01:12:33 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:47.834 01:12:33 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721524353.XXXXXX 00:02:47.834 01:12:33 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721524353.MzZuHL 00:02:47.834 01:12:33 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:47.834 01:12:33 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:02:47.834 01:12:33 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:47.834 01:12:33 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:47.834 01:12:33 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:47.834 01:12:33 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:47.834 01:12:33 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:47.834 01:12:33 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:47.834 01:12:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.092 01:12:33 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:48.092 01:12:33 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:48.092 01:12:33 -- pm/common@17 -- $ local monitor 00:02:48.092 01:12:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.092 01:12:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.092 01:12:33 -- pm/common@21 -- $ date +%s 00:02:48.092 01:12:33 -- pm/common@25 -- $ sleep 1 00:02:48.092 01:12:33 -- pm/common@21 -- $ date +%s 00:02:48.092 01:12:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721524353 00:02:48.092 01:12:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721524353 00:02:48.092 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721524353_collect-vmstat.pm.log 00:02:48.092 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721524353_collect-cpu-load.pm.log 00:02:49.026 01:12:34 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:49.026 01:12:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:49.026 01:12:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:49.026 01:12:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:49.026 01:12:34 -- spdk/autobuild.sh@16 -- $ date -u 00:02:49.026 Sun Jul 21 01:12:34 AM UTC 2024 00:02:49.026 01:12:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:49.026 v24.05-13-g5fa2f5086 00:02:49.026 01:12:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:49.026 01:12:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:49.026 01:12:34 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:49.026 01:12:34 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:49.026 01:12:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.026 ************************************ 00:02:49.026 START TEST asan 00:02:49.026 ************************************ 00:02:49.026 using asan 00:02:49.026 01:12:34 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:49.026 00:02:49.026 real 0m0.001s 00:02:49.026 user 0m0.000s 00:02:49.026 sys 0m0.000s 00:02:49.026 01:12:34 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:49.026 01:12:34 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:49.026 ************************************ 00:02:49.026 END TEST asan 00:02:49.026 ************************************ 00:02:49.026 01:12:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:49.026 01:12:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:49.026 01:12:34 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:49.026 01:12:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:49.026 01:12:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.026 ************************************ 00:02:49.026 START TEST ubsan 00:02:49.026 ************************************ 00:02:49.026 using ubsan 00:02:49.026 01:12:34 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:49.026 00:02:49.026 real 0m0.000s 00:02:49.026 user 0m0.000s 00:02:49.026 sys 0m0.000s 00:02:49.026 01:12:34 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:49.026 01:12:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:49.026 ************************************ 00:02:49.026 END TEST ubsan 00:02:49.026 ************************************ 00:02:49.284 01:12:34 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:49.284 01:12:34 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:49.284 01:12:34 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:49.284 01:12:34 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:49.284 01:12:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:49.284 01:12:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.284 ************************************ 00:02:49.284 START TEST build_native_dpdk 00:02:49.284 ************************************ 00:02:49.284 01:12:34 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc 
-dumpversion 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:49.284 eeb0605f11 version: 23.11.0 00:02:49.284 238778122a doc: update release notes for 23.11 00:02:49.284 46aa6b3cfc doc: fix description of RSS features 00:02:49.284 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:49.284 7e421ae345 devtools: support skipping forbid rule check 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:49.284 
01:12:34 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:49.284 01:12:34 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:49.284 patching file config/rte_config.h 00:02:49.284 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:49.284 01:12:34 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:54.552 The Meson build system 00:02:54.552 Version: 1.3.1 00:02:54.552 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:54.552 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:54.552 Build type: native build 00:02:54.552 Program cat found: YES (/usr/bin/cat) 00:02:54.552 Project name: DPDK 00:02:54.552 Project version: 23.11.0 00:02:54.552 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:54.552 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:54.552 Host machine cpu family: x86_64 00:02:54.552 Host machine cpu: x86_64 00:02:54.552 Message: ## Building in Developer Mode ## 00:02:54.552 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:54.552 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:54.552 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:54.552 Program python3 found: YES (/usr/bin/python3) 00:02:54.552 Program cat found: YES (/usr/bin/cat) 00:02:54.552 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:54.552 Compiler for C supports arguments -march=native: YES 00:02:54.552 Checking for size of "void *" : 8 00:02:54.552 Checking for size of "void *" : 8 (cached) 00:02:54.552 Library m found: YES 00:02:54.552 Library numa found: YES 00:02:54.552 Has header "numaif.h" : YES 00:02:54.552 Library fdt found: NO 00:02:54.552 Library execinfo found: NO 00:02:54.552 Has header "execinfo.h" : YES 00:02:54.552 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:54.552 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:54.552 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:54.552 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:54.552 Run-time dependency openssl found: YES 3.0.9 00:02:54.552 Run-time dependency libpcap found: YES 1.10.4 00:02:54.552 Has header "pcap.h" with dependency libpcap: YES 00:02:54.552 Compiler for C supports arguments -Wcast-qual: YES 00:02:54.552 Compiler for C supports arguments -Wdeprecated: YES 00:02:54.552 Compiler for C supports arguments -Wformat: YES 00:02:54.552 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:54.552 Compiler for C supports arguments -Wformat-security: NO 00:02:54.552 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:54.552 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:54.552 Compiler for C supports arguments -Wnested-externs: YES 00:02:54.552 Compiler for C supports arguments -Wold-style-definition: YES 00:02:54.552 Compiler for C supports arguments -Wpointer-arith: YES 00:02:54.552 Compiler for C supports arguments -Wsign-compare: YES 00:02:54.552 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:54.552 Compiler for C supports arguments -Wundef: YES 00:02:54.552 Compiler for C supports arguments -Wwrite-strings: YES 00:02:54.552 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:54.552 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:54.552 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:54.552 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:54.552 Program objdump found: YES (/usr/bin/objdump) 00:02:54.552 Compiler for C supports arguments -mavx512f: YES 00:02:54.552 Checking if "AVX512 checking" compiles: YES 00:02:54.552 Fetching value of define "__SSE4_2__" : 1 00:02:54.552 Fetching value of define "__AES__" : 1 00:02:54.552 Fetching value of define "__AVX__" : 1 00:02:54.552 Fetching value of define "__AVX2__" : 1 00:02:54.552 Fetching value of define "__AVX512BW__" : 1 00:02:54.552 Fetching value of define "__AVX512CD__" : 1 00:02:54.552 Fetching value of define "__AVX512DQ__" : 1 00:02:54.552 Fetching value of define "__AVX512F__" : 1 00:02:54.552 Fetching value of define "__AVX512VL__" : 1 00:02:54.552 Fetching value of define "__PCLMUL__" : 1 00:02:54.552 Fetching value of define "__RDRND__" : 1 00:02:54.552 Fetching value of define "__RDSEED__" : 1 00:02:54.552 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:54.552 Fetching value of define "__znver1__" : (undefined) 00:02:54.552 Fetching value of define "__znver2__" : (undefined) 00:02:54.552 Fetching value of define "__znver3__" : (undefined) 00:02:54.552 Fetching value of define "__znver4__" : (undefined) 00:02:54.552 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:54.552 Message: lib/log: Defining dependency "log" 00:02:54.552 Message: lib/kvargs: Defining dependency "kvargs" 00:02:54.552 Message: lib/telemetry: Defining dependency 
"telemetry" 00:02:54.552 Checking for function "getentropy" : NO 00:02:54.552 Message: lib/eal: Defining dependency "eal" 00:02:54.552 Message: lib/ring: Defining dependency "ring" 00:02:54.552 Message: lib/rcu: Defining dependency "rcu" 00:02:54.552 Message: lib/mempool: Defining dependency "mempool" 00:02:54.552 Message: lib/mbuf: Defining dependency "mbuf" 00:02:54.552 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:54.552 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:54.552 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:54.552 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:54.552 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:54.552 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:54.552 Compiler for C supports arguments -mpclmul: YES 00:02:54.552 Compiler for C supports arguments -maes: YES 00:02:54.552 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:54.552 Compiler for C supports arguments -mavx512bw: YES 00:02:54.552 Compiler for C supports arguments -mavx512dq: YES 00:02:54.552 Compiler for C supports arguments -mavx512vl: YES 00:02:54.552 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:54.552 Compiler for C supports arguments -mavx2: YES 00:02:54.552 Compiler for C supports arguments -mavx: YES 00:02:54.552 Message: lib/net: Defining dependency "net" 00:02:54.552 Message: lib/meter: Defining dependency "meter" 00:02:54.552 Message: lib/ethdev: Defining dependency "ethdev" 00:02:54.552 Message: lib/pci: Defining dependency "pci" 00:02:54.552 Message: lib/cmdline: Defining dependency "cmdline" 00:02:54.552 Message: lib/metrics: Defining dependency "metrics" 00:02:54.552 Message: lib/hash: Defining dependency "hash" 00:02:54.552 Message: lib/timer: Defining dependency "timer" 00:02:54.552 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:54.552 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:54.552 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:54.552 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:54.552 Message: lib/acl: Defining dependency "acl" 00:02:54.552 Message: lib/bbdev: Defining dependency "bbdev" 00:02:54.552 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:54.553 Run-time dependency libelf found: YES 0.190 00:02:54.553 Message: lib/bpf: Defining dependency "bpf" 00:02:54.553 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:54.553 Message: lib/compressdev: Defining dependency "compressdev" 00:02:54.553 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:54.553 Message: lib/distributor: Defining dependency "distributor" 00:02:54.553 Message: lib/dmadev: Defining dependency "dmadev" 00:02:54.553 Message: lib/efd: Defining dependency "efd" 00:02:54.553 Message: lib/eventdev: Defining dependency "eventdev" 00:02:54.553 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:54.553 Message: lib/gpudev: Defining dependency "gpudev" 00:02:54.553 Message: lib/gro: Defining dependency "gro" 00:02:54.553 Message: lib/gso: Defining dependency "gso" 00:02:54.553 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:54.553 Message: lib/jobstats: Defining dependency "jobstats" 00:02:54.553 Message: lib/latencystats: Defining dependency "latencystats" 00:02:54.553 Message: lib/lpm: Defining dependency "lpm" 00:02:54.553 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:54.553 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:54.553 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:02:54.553 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:54.553 Message: lib/member: Defining dependency "member" 00:02:54.553 Message: lib/pcapng: Defining dependency "pcapng" 00:02:54.553 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:54.553 Message: lib/power: Defining dependency "power" 00:02:54.553 Message: lib/rawdev: Defining dependency "rawdev" 00:02:54.553 Message: lib/regexdev: Defining dependency "regexdev" 00:02:54.553 Message: lib/mldev: Defining dependency "mldev" 00:02:54.553 Message: lib/rib: Defining dependency "rib" 00:02:54.553 Message: lib/reorder: Defining dependency "reorder" 00:02:54.553 Message: lib/sched: Defining dependency "sched" 00:02:54.553 Message: lib/security: Defining dependency "security" 00:02:54.553 Message: lib/stack: Defining dependency "stack" 00:02:54.553 Has header "linux/userfaultfd.h" : YES 00:02:54.553 Has header "linux/vduse.h" : YES 00:02:54.553 Message: lib/vhost: Defining dependency "vhost" 00:02:54.553 Message: lib/ipsec: Defining dependency "ipsec" 00:02:54.553 Message: lib/pdcp: Defining dependency "pdcp" 00:02:54.553 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:54.553 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:54.553 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:54.553 Message: lib/fib: Defining dependency "fib" 00:02:54.553 Message: lib/port: Defining dependency "port" 00:02:54.553 Message: lib/pdump: Defining dependency "pdump" 00:02:54.553 Message: lib/table: Defining dependency "table" 00:02:54.553 Message: lib/pipeline: Defining dependency "pipeline" 00:02:54.553 Message: lib/graph: Defining dependency "graph" 00:02:54.553 Message: lib/node: Defining dependency "node" 00:02:54.553 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:54.553 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:54.553 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:55.960 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:55.960 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:55.960 Compiler for C supports arguments -Wno-unused-value: YES 00:02:55.960 Compiler for C supports arguments -Wno-format: YES 00:02:55.960 Compiler for C supports arguments -Wno-format-security: YES 00:02:55.960 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:55.960 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:55.960 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:55.960 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:55.960 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:55.960 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:55.960 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:55.960 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:55.960 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:55.960 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:55.960 Has header "sys/epoll.h" : YES 00:02:55.960 Program doxygen found: YES (/usr/bin/doxygen) 00:02:55.960 Configuring doxy-api-html.conf using configuration 00:02:55.960 Configuring doxy-api-man.conf using configuration 00:02:55.960 Program mandb found: YES (/usr/bin/mandb) 00:02:55.960 Program sphinx-build found: NO 00:02:55.960 Configuring rte_build_config.h using configuration 00:02:55.960 Message: 00:02:55.960 ================= 00:02:55.960 Applications Enabled 00:02:55.960 
================= 00:02:55.960 00:02:55.960 apps: 00:02:55.960 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:55.960 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:55.960 test-pmd, test-regex, test-sad, test-security-perf, 00:02:55.960 00:02:55.960 Message: 00:02:55.960 ================= 00:02:55.960 Libraries Enabled 00:02:55.960 ================= 00:02:55.960 00:02:55.960 libs: 00:02:55.960 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:55.960 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:55.960 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:55.960 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:55.960 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:55.960 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:55.960 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:55.960 00:02:55.960 00:02:55.960 Message: 00:02:55.960 =============== 00:02:55.960 Drivers Enabled 00:02:55.960 =============== 00:02:55.960 00:02:55.960 common: 00:02:55.960 00:02:55.960 bus: 00:02:55.960 pci, vdev, 00:02:55.960 mempool: 00:02:55.960 ring, 00:02:55.960 dma: 00:02:55.960 00:02:55.960 net: 00:02:55.960 i40e, 00:02:55.960 raw: 00:02:55.960 00:02:55.960 crypto: 00:02:55.960 00:02:55.960 compress: 00:02:55.960 00:02:55.960 regex: 00:02:55.960 00:02:55.960 ml: 00:02:55.960 00:02:55.960 vdpa: 00:02:55.960 00:02:55.960 event: 00:02:55.960 00:02:55.960 baseband: 00:02:55.960 00:02:55.960 gpu: 00:02:55.960 00:02:55.960 00:02:55.960 Message: 00:02:55.960 ================= 00:02:55.960 Content Skipped 00:02:55.960 ================= 00:02:55.960 00:02:55.960 apps: 00:02:55.960 00:02:55.960 libs: 00:02:55.960 00:02:55.960 drivers: 00:02:55.960 common/cpt: not in enabled drivers build config 00:02:55.960 common/dpaax: not in enabled drivers build config 00:02:55.960 common/iavf: not in enabled drivers build config 00:02:55.960 common/idpf: not in enabled drivers build config 00:02:55.961 common/mvep: not in enabled drivers build config 00:02:55.961 common/octeontx: not in enabled drivers build config 00:02:55.961 bus/auxiliary: not in enabled drivers build config 00:02:55.961 bus/cdx: not in enabled drivers build config 00:02:55.961 bus/dpaa: not in enabled drivers build config 00:02:55.961 bus/fslmc: not in enabled drivers build config 00:02:55.961 bus/ifpga: not in enabled drivers build config 00:02:55.961 bus/platform: not in enabled drivers build config 00:02:55.961 bus/vmbus: not in enabled drivers build config 00:02:55.961 common/cnxk: not in enabled drivers build config 00:02:55.961 common/mlx5: not in enabled drivers build config 00:02:55.961 common/nfp: not in enabled drivers build config 00:02:55.961 common/qat: not in enabled drivers build config 00:02:55.961 common/sfc_efx: not in enabled drivers build config 00:02:55.961 mempool/bucket: not in enabled drivers build config 00:02:55.961 mempool/cnxk: not in enabled drivers build config 00:02:55.961 mempool/dpaa: not in enabled drivers build config 00:02:55.961 mempool/dpaa2: not in enabled drivers build config 00:02:55.961 mempool/octeontx: not in enabled drivers build config 00:02:55.961 mempool/stack: not in enabled drivers build config 00:02:55.961 dma/cnxk: not in enabled drivers build config 00:02:55.961 dma/dpaa: not in enabled drivers build config 00:02:55.961 dma/dpaa2: not in enabled drivers build 
config 00:02:55.961 dma/hisilicon: not in enabled drivers build config 00:02:55.961 dma/idxd: not in enabled drivers build config 00:02:55.961 dma/ioat: not in enabled drivers build config 00:02:55.961 dma/skeleton: not in enabled drivers build config 00:02:55.961 net/af_packet: not in enabled drivers build config 00:02:55.961 net/af_xdp: not in enabled drivers build config 00:02:55.961 net/ark: not in enabled drivers build config 00:02:55.961 net/atlantic: not in enabled drivers build config 00:02:55.961 net/avp: not in enabled drivers build config 00:02:55.961 net/axgbe: not in enabled drivers build config 00:02:55.961 net/bnx2x: not in enabled drivers build config 00:02:55.961 net/bnxt: not in enabled drivers build config 00:02:55.961 net/bonding: not in enabled drivers build config 00:02:55.961 net/cnxk: not in enabled drivers build config 00:02:55.961 net/cpfl: not in enabled drivers build config 00:02:55.961 net/cxgbe: not in enabled drivers build config 00:02:55.961 net/dpaa: not in enabled drivers build config 00:02:55.961 net/dpaa2: not in enabled drivers build config 00:02:55.961 net/e1000: not in enabled drivers build config 00:02:55.961 net/ena: not in enabled drivers build config 00:02:55.961 net/enetc: not in enabled drivers build config 00:02:55.961 net/enetfec: not in enabled drivers build config 00:02:55.961 net/enic: not in enabled drivers build config 00:02:55.961 net/failsafe: not in enabled drivers build config 00:02:55.961 net/fm10k: not in enabled drivers build config 00:02:55.961 net/gve: not in enabled drivers build config 00:02:55.961 net/hinic: not in enabled drivers build config 00:02:55.961 net/hns3: not in enabled drivers build config 00:02:55.961 net/iavf: not in enabled drivers build config 00:02:55.961 net/ice: not in enabled drivers build config 00:02:55.961 net/idpf: not in enabled drivers build config 00:02:55.961 net/igc: not in enabled drivers build config 00:02:55.961 net/ionic: not in enabled drivers build config 00:02:55.961 net/ipn3ke: not in enabled drivers build config 00:02:55.961 net/ixgbe: not in enabled drivers build config 00:02:55.961 net/mana: not in enabled drivers build config 00:02:55.961 net/memif: not in enabled drivers build config 00:02:55.961 net/mlx4: not in enabled drivers build config 00:02:55.961 net/mlx5: not in enabled drivers build config 00:02:55.961 net/mvneta: not in enabled drivers build config 00:02:55.961 net/mvpp2: not in enabled drivers build config 00:02:55.961 net/netvsc: not in enabled drivers build config 00:02:55.961 net/nfb: not in enabled drivers build config 00:02:55.961 net/nfp: not in enabled drivers build config 00:02:55.961 net/ngbe: not in enabled drivers build config 00:02:55.961 net/null: not in enabled drivers build config 00:02:55.961 net/octeontx: not in enabled drivers build config 00:02:55.961 net/octeon_ep: not in enabled drivers build config 00:02:55.961 net/pcap: not in enabled drivers build config 00:02:55.961 net/pfe: not in enabled drivers build config 00:02:55.961 net/qede: not in enabled drivers build config 00:02:55.961 net/ring: not in enabled drivers build config 00:02:55.961 net/sfc: not in enabled drivers build config 00:02:55.961 net/softnic: not in enabled drivers build config 00:02:55.961 net/tap: not in enabled drivers build config 00:02:55.961 net/thunderx: not in enabled drivers build config 00:02:55.961 net/txgbe: not in enabled drivers build config 00:02:55.961 net/vdev_netvsc: not in enabled drivers build config 00:02:55.961 net/vhost: not in enabled drivers build config 
00:02:55.961 net/virtio: not in enabled drivers build config 00:02:55.961 net/vmxnet3: not in enabled drivers build config 00:02:55.961 raw/cnxk_bphy: not in enabled drivers build config 00:02:55.961 raw/cnxk_gpio: not in enabled drivers build config 00:02:55.961 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:55.961 raw/ifpga: not in enabled drivers build config 00:02:55.961 raw/ntb: not in enabled drivers build config 00:02:55.961 raw/skeleton: not in enabled drivers build config 00:02:55.961 crypto/armv8: not in enabled drivers build config 00:02:55.961 crypto/bcmfs: not in enabled drivers build config 00:02:55.961 crypto/caam_jr: not in enabled drivers build config 00:02:55.961 crypto/ccp: not in enabled drivers build config 00:02:55.961 crypto/cnxk: not in enabled drivers build config 00:02:55.961 crypto/dpaa_sec: not in enabled drivers build config 00:02:55.961 crypto/dpaa2_sec: not in enabled drivers build config 00:02:55.961 crypto/ipsec_mb: not in enabled drivers build config 00:02:55.961 crypto/mlx5: not in enabled drivers build config 00:02:55.961 crypto/mvsam: not in enabled drivers build config 00:02:55.961 crypto/nitrox: not in enabled drivers build config 00:02:55.961 crypto/null: not in enabled drivers build config 00:02:55.961 crypto/octeontx: not in enabled drivers build config 00:02:55.961 crypto/openssl: not in enabled drivers build config 00:02:55.961 crypto/scheduler: not in enabled drivers build config 00:02:55.961 crypto/uadk: not in enabled drivers build config 00:02:55.961 crypto/virtio: not in enabled drivers build config 00:02:55.961 compress/isal: not in enabled drivers build config 00:02:55.961 compress/mlx5: not in enabled drivers build config 00:02:55.961 compress/octeontx: not in enabled drivers build config 00:02:55.961 compress/zlib: not in enabled drivers build config 00:02:55.961 regex/mlx5: not in enabled drivers build config 00:02:55.961 regex/cn9k: not in enabled drivers build config 00:02:55.961 ml/cnxk: not in enabled drivers build config 00:02:55.961 vdpa/ifc: not in enabled drivers build config 00:02:55.961 vdpa/mlx5: not in enabled drivers build config 00:02:55.961 vdpa/nfp: not in enabled drivers build config 00:02:55.961 vdpa/sfc: not in enabled drivers build config 00:02:55.961 event/cnxk: not in enabled drivers build config 00:02:55.961 event/dlb2: not in enabled drivers build config 00:02:55.961 event/dpaa: not in enabled drivers build config 00:02:55.961 event/dpaa2: not in enabled drivers build config 00:02:55.961 event/dsw: not in enabled drivers build config 00:02:55.961 event/opdl: not in enabled drivers build config 00:02:55.961 event/skeleton: not in enabled drivers build config 00:02:55.961 event/sw: not in enabled drivers build config 00:02:55.961 event/octeontx: not in enabled drivers build config 00:02:55.961 baseband/acc: not in enabled drivers build config 00:02:55.961 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:55.961 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:55.961 baseband/la12xx: not in enabled drivers build config 00:02:55.961 baseband/null: not in enabled drivers build config 00:02:55.961 baseband/turbo_sw: not in enabled drivers build config 00:02:55.961 gpu/cuda: not in enabled drivers build config 00:02:55.961 00:02:55.961 00:02:55.961 Build targets in project: 217 00:02:55.961 00:02:55.961 DPDK 23.11.0 00:02:55.961 00:02:55.961 User defined options 00:02:55.961 libdir : lib 00:02:55.961 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:55.961 c_args : -fPIC -g 
-fcommon -Werror -Wno-stringop-overflow 00:02:55.961 c_link_args : 00:02:55.961 enable_docs : false 00:02:55.961 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:55.961 enable_kmods : false 00:02:55.961 machine : native 00:02:55.961 tests : false 00:02:55.961 00:02:55.961 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:55.961 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:55.961 01:12:41 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:56.220 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:56.220 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:56.220 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:56.220 [3/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:56.220 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:56.220 [5/707] Linking static target lib/librte_kvargs.a 00:02:56.220 [6/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:56.220 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:56.220 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:56.220 [9/707] Linking static target lib/librte_log.a 00:02:56.478 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:56.478 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:56.478 [12/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.478 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:56.737 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:56.737 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:56.737 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:56.737 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.737 [18/707] Linking target lib/librte_log.so.24.0 00:02:56.737 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:56.737 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:56.996 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:56.996 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:56.996 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.996 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:56.996 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:56.996 [26/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:56.996 [27/707] Linking static target lib/librte_telemetry.a 00:02:56.996 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:57.254 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:57.254 [30/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:57.254 [31/707] Linking target lib/librte_kvargs.so.24.0 00:02:57.254 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:57.254 [33/707] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:57.254 [34/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:57.254 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:57.254 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:57.254 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:57.512 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.512 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:57.512 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.512 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.512 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.512 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:57.512 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:57.512 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:57.771 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:57.771 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.771 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:57.771 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:57.771 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:57.771 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:57.771 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:58.029 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:58.029 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:58.029 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:58.029 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:58.029 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:58.029 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:58.029 [59/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:58.029 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:58.029 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:58.029 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:58.287 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:58.287 [64/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:58.287 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:58.287 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:58.287 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:58.287 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:58.544 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.545 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.545 [71/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:58.545 [72/707] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:58.545 [73/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.545 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:58.545 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:58.545 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:58.545 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.545 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.801 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.801 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.801 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:58.801 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:58.801 [83/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:59.060 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:59.060 [85/707] Linking static target lib/librte_ring.a 00:02:59.060 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:59.060 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:59.060 [88/707] Linking static target lib/librte_eal.a 00:02:59.060 [89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:59.318 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:59.318 [91/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.318 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:59.318 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:59.318 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.318 [95/707] Linking static target lib/librte_mempool.a 00:02:59.577 [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.577 [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.577 [98/707] Linking static target lib/librte_rcu.a 00:02:59.577 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:59.577 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.577 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.577 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:59.577 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:59.835 [104/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.835 [105/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:59.835 [106/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:59.835 [107/707] Linking static target lib/librte_net.a 00:02:59.835 [108/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:59.835 [109/707] Linking static target lib/librte_mbuf.a 00:02:59.835 [110/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:59.835 [111/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.835 [112/707] Linking static target lib/librte_meter.a 00:03:00.094 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.094 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:00.094 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.094 [116/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.094 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:00.094 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.353 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.612 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.612 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.870 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:00.870 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:00.870 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:00.870 [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:00.870 [126/707] Linking static target lib/librte_pci.a 00:03:01.129 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:01.129 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:01.129 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.129 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:01.129 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:01.129 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:01.129 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.129 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:01.129 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:01.129 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:01.129 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:01.129 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:01.387 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:01.387 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:01.387 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:01.387 [142/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.387 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:01.387 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:01.387 [145/707] Linking static target lib/librte_cmdline.a 00:03:01.646 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:01.646 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:01.646 [148/707] Linking static target lib/librte_metrics.a 00:03:01.646 [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:01.904 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:01.904 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.163 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:02.163 [153/707] Linking static target lib/librte_timer.a 00:03:02.163 [154/707] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.163 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.421 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:02.421 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.421 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:02.679 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:02.679 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:02.937 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:02.937 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:02.937 [163/707] Linking static target lib/librte_bitratestats.a 00:03:03.195 [164/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:03.195 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.195 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:03.195 [167/707] Linking static target lib/librte_bbdev.a 00:03:03.195 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:03.454 [169/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.454 [170/707] Linking static target lib/librte_hash.a 00:03:03.454 [171/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:03.454 [172/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.454 [173/707] Linking static target lib/librte_ethdev.a 00:03:03.713 [174/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:03.713 [175/707] Linking static target lib/acl/libavx2_tmp.a 00:03:03.713 [176/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:03.713 [177/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:03.713 [178/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.970 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:03.971 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:03.971 [181/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.971 [182/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:04.229 [183/707] Linking static target lib/librte_cfgfile.a 00:03:04.229 [184/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:04.229 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:04.487 [186/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.487 [187/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.487 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:04.487 [189/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.487 [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:04.487 [191/707] Linking static target lib/librte_bpf.a 00:03:04.746 [192/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.746 [193/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:04.746 [194/707] Linking static target lib/librte_compressdev.a 00:03:04.746 [195/707] Linking static target lib/librte_acl.a 00:03:04.746 [196/707] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:05.004 [197/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.004 [198/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:05.004 [199/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:05.004 [200/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.004 [201/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:05.004 [202/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:05.263 [203/707] Linking static target lib/librte_distributor.a 00:03:05.263 [204/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.263 [205/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:05.263 [206/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.263 [207/707] Linking target lib/librte_eal.so.24.0 00:03:05.263 [208/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:05.263 [209/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.521 [210/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:05.521 [211/707] Linking target lib/librte_ring.so.24.0 00:03:05.521 [212/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.521 [213/707] Linking target lib/librte_meter.so.24.0 00:03:05.521 [214/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:05.521 [215/707] Linking target lib/librte_rcu.so.24.0 00:03:05.779 [216/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:05.779 [217/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:05.779 [218/707] Linking target lib/librte_mempool.so.24.0 00:03:05.779 [219/707] Linking target lib/librte_pci.so.24.0 00:03:05.779 [220/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:05.779 [221/707] Linking target lib/librte_timer.so.24.0 00:03:05.779 [222/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:05.779 [223/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:05.779 [224/707] Linking target lib/librte_mbuf.so.24.0 00:03:05.779 [225/707] Linking target lib/librte_acl.so.24.0 00:03:05.779 [226/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:06.037 [227/707] Linking target lib/librte_cfgfile.so.24.0 00:03:06.037 [228/707] Linking static target lib/librte_dmadev.a 00:03:06.037 [229/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:06.037 [230/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:06.037 [231/707] Linking target lib/librte_net.so.24.0 00:03:06.037 [232/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:06.037 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:06.037 [234/707] Linking target lib/librte_bbdev.so.24.0 00:03:06.037 [235/707] Linking static target lib/librte_efd.a 00:03:06.037 [236/707] Linking target lib/librte_compressdev.so.24.0 00:03:06.037 [237/707] Linking target 
lib/librte_distributor.so.24.0 00:03:06.037 [238/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:06.037 [239/707] Linking target lib/librte_cmdline.so.24.0 00:03:06.316 [240/707] Linking target lib/librte_hash.so.24.0 00:03:06.316 [241/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:06.316 [242/707] Linking static target lib/librte_cryptodev.a 00:03:06.316 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:06.316 [244/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.316 [245/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.316 [246/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:06.316 [247/707] Linking target lib/librte_dmadev.so.24.0 00:03:06.316 [248/707] Linking target lib/librte_efd.so.24.0 00:03:06.573 [249/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:06.573 [250/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:06.573 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:06.833 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:06.833 [253/707] Linking static target lib/librte_dispatcher.a 00:03:06.833 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:07.091 [255/707] Linking static target lib/librte_gpudev.a 00:03:07.091 [256/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:07.091 [257/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:07.091 [258/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:07.091 [259/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:07.091 [260/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.350 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.350 [262/707] Linking target lib/librte_cryptodev.so.24.0 00:03:07.609 [263/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:07.609 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:07.609 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:07.609 [266/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:07.609 [267/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:07.609 [268/707] Linking static target lib/librte_gro.a 00:03:07.609 [269/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:07.609 [270/707] Linking static target lib/librte_eventdev.a 00:03:07.609 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:07.609 [272/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:07.870 [273/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.870 [274/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.870 [275/707] Linking target lib/librte_gpudev.so.24.0 00:03:07.870 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:07.870 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:07.870 [278/707] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:07.870 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:07.870 [280/707] Linking static target lib/librte_gso.a 00:03:08.128 [281/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.128 [282/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:08.128 [283/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:08.128 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:08.386 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:08.386 [286/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:08.386 [287/707] Linking static target lib/librte_jobstats.a 00:03:08.387 [288/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.387 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:08.387 [290/707] Linking target lib/librte_ethdev.so.24.0 00:03:08.387 [291/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:08.387 [292/707] Linking static target lib/librte_ip_frag.a 00:03:08.645 [293/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:08.645 [294/707] Linking target lib/librte_metrics.so.24.0 00:03:08.645 [295/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.645 [296/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:08.645 [297/707] Linking target lib/librte_gro.so.24.0 00:03:08.645 [298/707] Linking target lib/librte_bpf.so.24.0 00:03:08.645 [299/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:08.645 [300/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:08.645 [301/707] Linking target lib/librte_gso.so.24.0 00:03:08.645 [302/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:08.645 [303/707] Linking static target lib/librte_latencystats.a 00:03:08.645 [304/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:08.645 [305/707] Linking target lib/librte_jobstats.so.24.0 00:03:08.645 [306/707] Linking target lib/librte_bitratestats.so.24.0 00:03:08.645 [307/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:08.645 [308/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.645 [309/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:08.904 [310/707] Linking target lib/librte_ip_frag.so.24.0 00:03:08.904 [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:08.904 [312/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:08.904 [313/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.904 [314/707] Linking target lib/librte_latencystats.so.24.0 00:03:08.904 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:09.163 [316/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:09.163 [317/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:09.163 [318/707] Linking static target lib/librte_lpm.a 00:03:09.163 [319/707] Compiling C object 
lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:09.163 [320/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:09.423 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:09.423 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:09.423 [323/707] Linking static target lib/librte_pcapng.a 00:03:09.423 [324/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.423 [325/707] Linking target lib/librte_lpm.so.24.0 00:03:09.423 [326/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:09.423 [327/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:09.423 [328/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:09.423 [329/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:09.682 [330/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:09.682 [331/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.682 [332/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.682 [333/707] Linking target lib/librte_pcapng.so.24.0 00:03:09.682 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:09.682 [335/707] Linking target lib/librte_eventdev.so.24.0 00:03:09.682 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:09.682 [337/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:09.682 [338/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:09.942 [339/707] Linking target lib/librte_dispatcher.so.24.0 00:03:09.942 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:09.942 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:09.942 [342/707] Linking static target lib/librte_power.a 00:03:09.942 [343/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:09.942 [344/707] Linking static target lib/librte_regexdev.a 00:03:09.942 [345/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:09.942 [346/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:09.942 [347/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:09.942 [348/707] Linking static target lib/librte_rawdev.a 00:03:10.202 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:10.202 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:10.202 [351/707] Linking static target lib/librte_mldev.a 00:03:10.461 [352/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:10.461 [353/707] Linking static target lib/librte_member.a 00:03:10.461 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:10.461 [355/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:10.461 [356/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.461 [357/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:10.461 [358/707] Linking target lib/librte_rawdev.so.24.0 00:03:10.461 [359/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.461 [360/707] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:10.461 [361/707] Linking static target lib/librte_reorder.a 00:03:10.461 [362/707] Linking target lib/librte_power.so.24.0 00:03:10.721 [363/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.721 [364/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:10.721 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.721 [366/707] Linking static target lib/librte_rib.a 00:03:10.721 [367/707] Linking target lib/librte_member.so.24.0 00:03:10.721 [368/707] Linking target lib/librte_regexdev.so.24.0 00:03:10.721 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:10.721 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:10.721 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:10.980 [372/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.980 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:10.980 [374/707] Linking target lib/librte_reorder.so.24.0 00:03:10.980 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:10.980 [376/707] Linking static target lib/librte_stack.a 00:03:10.980 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:10.980 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:10.980 [379/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.980 [380/707] Linking static target lib/librte_security.a 00:03:10.980 [381/707] Linking target lib/librte_rib.so.24.0 00:03:11.239 [382/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.239 [383/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:11.239 [384/707] Linking target lib/librte_stack.so.24.0 00:03:11.239 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:11.239 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:11.499 [387/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:11.499 [388/707] Linking static target lib/librte_sched.a 00:03:11.499 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.499 [390/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.499 [391/707] Linking target lib/librte_security.so.24.0 00:03:11.499 [392/707] Linking target lib/librte_mldev.so.24.0 00:03:11.499 [393/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:11.499 [394/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:11.758 [395/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.758 [396/707] Linking target lib/librte_sched.so.24.0 00:03:11.758 [397/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:11.758 [398/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:12.016 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:12.016 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:12.016 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:12.274 [402/707] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:12.274 [403/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:12.274 [404/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:12.532 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:12.532 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:12.532 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:12.532 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:12.791 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:12.791 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:12.791 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:12.791 [412/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:13.050 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:13.050 [414/707] Linking static target lib/librte_ipsec.a 00:03:13.050 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:13.357 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.357 [417/707] Linking target lib/librte_ipsec.so.24.0 00:03:13.357 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:13.357 [419/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:13.357 [420/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:13.357 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:13.659 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:13.660 [423/707] Linking static target lib/librte_fib.a 00:03:13.660 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:13.660 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:13.660 [426/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.660 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:13.918 [428/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:13.918 [429/707] Linking target lib/librte_fib.so.24.0 00:03:13.918 [430/707] Linking static target lib/librte_pdcp.a 00:03:13.918 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:13.918 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:14.211 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.211 [434/707] Linking target lib/librte_pdcp.so.24.0 00:03:14.470 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:14.470 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:14.470 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:14.470 [438/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:14.470 [439/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:14.470 [440/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:14.729 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:14.729 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:14.986 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:14.986 [444/707] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:14.986 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:14.986 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:14.986 [447/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:14.986 [448/707] Linking static target lib/librte_port.a 00:03:15.245 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:15.245 [450/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:15.245 [451/707] Linking static target lib/librte_pdump.a 00:03:15.245 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:15.245 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:15.504 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.504 [455/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.504 [456/707] Linking target lib/librte_pdump.so.24.0 00:03:15.504 [457/707] Linking target lib/librte_port.so.24.0 00:03:15.504 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:15.763 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:15.763 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:15.763 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:15.763 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:16.022 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:16.022 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:16.022 [465/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:16.022 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:16.281 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:16.281 [468/707] Linking static target lib/librte_table.a 00:03:16.281 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:16.540 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:16.540 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:16.800 [472/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:16.800 [473/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.800 [474/707] Linking target lib/librte_table.so.24.0 00:03:16.800 [475/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:16.800 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:16.800 [477/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:16.800 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:17.059 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:17.059 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:17.318 [481/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:17.318 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:17.318 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:17.578 [484/707] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:17.578 [485/707] Linking static target lib/librte_graph.a 00:03:17.578 [486/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:17.578 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:17.578 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:17.836 [489/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:17.836 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:18.094 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.094 [492/707] Linking target lib/librte_graph.so.24.0 00:03:18.094 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:18.094 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:18.353 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:18.353 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:18.353 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:18.611 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:18.611 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:18.611 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:18.611 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:18.611 [502/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:18.611 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:18.870 [504/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:18.870 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:19.129 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:19.129 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:19.129 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:19.129 [509/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:19.129 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:19.129 [511/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:19.129 [512/707] Linking static target lib/librte_node.a 00:03:19.388 [513/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.388 [514/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.388 [515/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.388 [516/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:19.388 [517/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:19.388 [518/707] Linking target lib/librte_node.so.24.0 00:03:19.654 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:19.654 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.654 [521/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:19.654 [522/707] Linking static target drivers/librte_bus_pci.a 00:03:19.654 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.654 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.654 [525/707] 
Linking static target drivers/librte_bus_vdev.a 00:03:19.654 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:19.654 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.912 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:19.912 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:19.912 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.912 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:19.912 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.912 [533/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:19.912 [534/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.912 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.169 [536/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:20.169 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.169 [538/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:20.169 [539/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.169 [540/707] Linking static target drivers/librte_mempool_ring.a 00:03:20.169 [541/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.169 [542/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:20.169 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:20.427 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:20.685 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:20.685 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:20.685 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:21.250 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:21.250 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:21.508 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:21.508 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:21.766 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:21.766 [553/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:21.766 [554/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:21.766 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:22.026 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:22.026 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:22.026 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:22.284 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:22.284 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:22.542 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:22.543 [562/707] Compiling C object 
app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:22.543 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:22.801 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:23.059 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:23.059 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:23.059 [567/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:23.059 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:23.059 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:23.059 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:23.318 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:23.318 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:23.318 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:23.318 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:23.576 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:23.835 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:23.835 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:23.835 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:23.835 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:23.835 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:23.835 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:24.095 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:24.095 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:24.095 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:24.095 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:24.355 [586/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:24.355 [587/707] Linking static target drivers/librte_net_i40e.a 00:03:24.355 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:24.355 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:24.355 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:24.615 [591/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.615 [592/707] Linking static target lib/librte_vhost.a 00:03:24.874 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:24.874 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:24.874 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:24.874 [596/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.874 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:24.874 [598/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:25.133 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:25.133 [600/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:25.133 [601/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:25.393 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:25.393 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:25.393 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:25.652 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:25.652 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:25.652 [607/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.652 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:25.652 [609/707] Linking target lib/librte_vhost.so.24.0 00:03:25.652 [610/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:25.910 [611/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:25.910 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:25.910 [613/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:25.910 [614/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:25.910 [615/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:26.169 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:26.169 [617/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:26.428 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:26.428 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:26.428 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:26.996 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:26.996 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:26.996 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:26.996 [624/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:27.255 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:27.255 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:27.255 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:27.255 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:27.255 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:27.515 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:27.515 [631/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:27.515 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:27.515 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:27.773 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:27.773 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:27.773 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:27.773 [637/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:28.032 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:28.032 [639/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:28.032 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:28.032 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:28.291 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:28.291 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:28.291 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:28.549 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:28.549 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:28.549 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:28.549 [648/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:28.549 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:28.549 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:28.807 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:28.807 [652/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:28.807 [653/707] Linking static target lib/librte_pipeline.a 00:03:28.807 [654/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:29.064 [655/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:29.064 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:29.064 [657/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:29.064 [658/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:29.322 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:29.322 [660/707] Linking target app/dpdk-dumpcap 00:03:29.322 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:29.580 [662/707] Linking target app/dpdk-graph 00:03:29.580 [663/707] Linking target app/dpdk-pdump 00:03:29.580 [664/707] Linking target app/dpdk-proc-info 00:03:29.580 [665/707] Linking target app/dpdk-test-acl 00:03:29.838 [666/707] Linking target app/dpdk-test-cmdline 00:03:29.838 [667/707] Linking target app/dpdk-test-bbdev 00:03:29.838 [668/707] Linking target app/dpdk-test-compress-perf 00:03:29.838 [669/707] Linking target app/dpdk-test-crypto-perf 00:03:30.096 [670/707] Linking target app/dpdk-test-dma-perf 00:03:30.096 [671/707] Linking target app/dpdk-test-eventdev 00:03:30.096 [672/707] Linking target app/dpdk-test-fib 00:03:30.096 [673/707] Linking target app/dpdk-test-gpudev 00:03:30.354 [674/707] Linking target app/dpdk-test-flow-perf 00:03:30.354 [675/707] Linking target app/dpdk-test-pipeline 00:03:30.354 [676/707] Linking target app/dpdk-test-mldev 00:03:30.354 [677/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:30.354 [678/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:30.612 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:30.870 [680/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:30.870 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 
00:03:30.870 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:30.870 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:30.870 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:31.437 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:31.437 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:31.437 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:31.437 [688/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.437 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:31.437 [690/707] Linking target lib/librte_pipeline.so.24.0 00:03:31.696 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:31.696 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:31.696 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:31.954 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:31.954 [695/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:32.214 [696/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:32.214 [697/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:32.214 [698/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:32.473 [699/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:32.473 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:32.473 [701/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:32.473 [702/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:32.473 [703/707] Linking target app/dpdk-test-regex 00:03:32.473 [704/707] Linking target app/dpdk-test-sad 00:03:32.732 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:32.991 [706/707] Linking target app/dpdk-testpmd 00:03:33.250 [707/707] Linking target app/dpdk-test-security-perf 00:03:33.250 01:13:18 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:33.250 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:33.250 [0/1] Installing files. 
00:03:33.513 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.513 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.515 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.515 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:33.516 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.517 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:33.517 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:33.517 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.517 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
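The listing here shows each DPDK library being installed twice under /home/vagrant/spdk_repo/dpdk/build/lib: once as a static archive (.a) and once as a versioned shared object (.so.24.0), with the matching libdpdk.pc pkg-config files landing in build/lib/pkgconfig further down in this log. As a minimal sketch of an application built against this installed tree (assuming PKG_CONFIG_PATH points at that pkgconfig directory; the file name hello.c is purely illustrative and not taken from the log):

    /* hello.c - bring up the EAL against the freshly installed DPDK (illustrative) */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_version.h>

    int main(int argc, char **argv)
    {
            /* rte_eal_init() consumes the EAL arguments; a negative return means init failed */
            int ret = rte_eal_init(argc, argv);
            if (ret < 0)
                    return 1;
            printf("EAL initialized, %s\n", rte_version());
            rte_eal_cleanup();  /* release hugepages and other EAL resources */
            return 0;
    }

Such a program would typically be compiled with cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello after exporting PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig; the exact flags come from the libdpdk.pc installed by this step, so treat this as a sketch rather than a command taken from the log.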
00:03:33.518 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
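Many of the headers copied into /home/vagrant/spdk_repo/dpdk/build/include later in this log are header-only utilities that work without starting the EAL; rte_jhash.h is one example. A small illustrative sketch (the key values are made up, not taken from the log), compiled with the same pkg-config --cflags libdpdk include path as above:

    /* jhash_demo.c - header-only Jenkins hash from the installed rte_jhash.h (illustrative) */
    #include <stdio.h>
    #include <stdint.h>
    #include <rte_jhash.h>

    int main(void)
    {
            uint32_t key[4] = { 1, 2, 3, 4 };   /* sample 16-byte key, illustrative */
            /* rte_jhash_32b() hashes an array of 32-bit words; the third argument is the seed */
            uint32_t h = rte_jhash_32b(key, 4, 0);
            printf("jhash = 0x%08x\n", h);
            return 0;
    }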
00:03:33.518 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.518 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.090 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.090 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.090 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.090 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.090 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.090 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.090 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.090 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.090 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.090 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:34.090 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.091 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.092 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.093 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.093 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:34.093 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:34.093 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:34.093 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:34.093 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:34.093 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:34.093 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:34.093 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:34.093 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:34.093 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:34.093 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:34.093 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:34.093 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:34.093 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:34.093 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:34.093 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:34.093 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:34.093 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:34.093 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:34.093 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:34.093 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:34.093 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:34.093 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:34.093 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:34.093 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:34.093 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:34.093 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:34.093 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:34.093 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:34.093 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:34.093 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:34.093 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:34.093 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:34.093 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:34.093 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:34.093 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:34.093 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:34.093 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:34.093 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:34.093 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:34.093 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:34.093 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:34.094 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:34.094 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:34.094 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:34.094 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:34.094 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:34.094 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:34.094 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:34.094 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:34.094 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:34.094 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:34.094 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:34.094 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:34.094 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:34.094 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:34.094 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:34.094 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:34.094 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:34.094 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:34.094 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:34.094 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:34.094 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:34.094 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:34.094 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:34.094 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:34.094 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:34.094 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:34.094 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:34.094 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:34.094 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:34.094 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:34.094 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:34.094 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:34.094 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:34.094 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:34.094 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:34.094 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:34.094 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:34.094 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:34.094 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:34.094 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:34.094 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:34.094 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:34.094 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:34.094 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:34.094 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:34.094 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:34.094 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:34.094 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:34.094 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:34.094 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:34.094 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:34.094 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:34.094 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:34.094 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:34.094 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:34.094 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:34.094 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:34.094 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:34.094 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:34.094 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:34.094 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:34.094 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:34.094 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:34.094 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:34.094 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:34.094 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:34.094 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:34.094 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:34.094 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:34.094 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:34.094 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:34.094 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:34.094 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:34.094 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:34.094 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:34.094 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:34.094 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:34.094 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:34.094 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:34.094 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:34.094 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:34.094 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:34.094 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:34.094 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:34.094 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:34.094 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:34.094 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:34.094 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:34.094 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:34.094 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:34.094 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:34.095 01:13:19 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:34.095 01:13:19 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:34.095 01:13:19 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:34.095 01:13:19 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:34.095 ************************************ 00:03:34.095 END TEST build_native_dpdk 00:03:34.095 ************************************ 00:03:34.095 00:03:34.095 real 0m44.853s 00:03:34.095 user 4m54.756s 00:03:34.095 sys 1m5.222s 00:03:34.095 01:13:19 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:34.095 01:13:19 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:34.095 01:13:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:34.095 01:13:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:34.095 01:13:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:34.095 01:13:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:34.095 01:13:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:34.095 01:13:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:34.095 01:13:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:34.095 01:13:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:34.353 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:34.353 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.353 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:34.353 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:34.921 Using 'verbs' RDMA provider 00:03:51.235 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:09.315 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:09.315 Creating mk/config.mk...done. 00:04:09.315 Creating mk/cc.flags.mk...done. 00:04:09.315 Type 'make' to build. 00:04:09.315 01:13:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:09.315 01:13:52 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:04:09.315 01:13:52 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:04:09.315 01:13:52 -- common/autotest_common.sh@10 -- $ set +x 00:04:09.315 ************************************ 00:04:09.315 START TEST make 00:04:09.315 ************************************ 00:04:09.315 01:13:52 make -- common/autotest_common.sh@1121 -- $ make -j10 00:04:09.315 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:09.315 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:09.315 meson setup builddir \ 00:04:09.315 -Dwith-libaio=enabled \ 00:04:09.315 -Dwith-liburing=enabled \ 00:04:09.315 -Dwith-libvfn=disabled \ 00:04:09.315 -Dwith-spdk=false && \ 00:04:09.315 meson compile -C builddir && \ 00:04:09.315 cd -) 00:04:09.315 make[1]: Nothing to be done for 'all'. 
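At this point the DPDK install above has completed and SPDK's configure has been pointed at it via --with-dpdk, picking up the pkg-config files placed under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig (the "Using ... for additional libs" line above). A minimal sketch of how that linkage could be checked by hand, assuming the same paths shown in this log:

export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk      # should report the version of the DPDK tree installed above
pkg-config --cflags --libs libdpdk   # compile/link flags a consumer such as SPDK would pick up

The xnvme sub-build that follows is driven by the meson invocation echoed above; its configuration output continues below.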
00:04:09.574 The Meson build system 00:04:09.574 Version: 1.3.1 00:04:09.574 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:09.574 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:09.574 Build type: native build 00:04:09.574 Project name: xnvme 00:04:09.574 Project version: 0.7.3 00:04:09.574 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:09.574 C linker for the host machine: gcc ld.bfd 2.39-16 00:04:09.574 Host machine cpu family: x86_64 00:04:09.574 Host machine cpu: x86_64 00:04:09.574 Message: host_machine.system: linux 00:04:09.574 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:09.574 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:09.574 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:09.574 Run-time dependency threads found: YES 00:04:09.574 Has header "setupapi.h" : NO 00:04:09.574 Has header "linux/blkzoned.h" : YES 00:04:09.574 Has header "linux/blkzoned.h" : YES (cached) 00:04:09.574 Has header "libaio.h" : YES 00:04:09.574 Library aio found: YES 00:04:09.574 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:09.574 Run-time dependency liburing found: YES 2.2 00:04:09.574 Dependency libvfn skipped: feature with-libvfn disabled 00:04:09.574 Run-time dependency appleframeworks found: NO (tried framework) 00:04:09.574 Run-time dependency appleframeworks found: NO (tried framework) 00:04:09.574 Configuring xnvme_config.h using configuration 00:04:09.574 Configuring xnvme.spec using configuration 00:04:09.574 Run-time dependency bash-completion found: YES 2.11 00:04:09.574 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:09.574 Program cp found: YES (/usr/bin/cp) 00:04:09.574 Has header "winsock2.h" : NO 00:04:09.574 Has header "dbghelp.h" : NO 00:04:09.574 Library rpcrt4 found: NO 00:04:09.574 Library rt found: YES 00:04:09.574 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:09.574 Found CMake: /usr/bin/cmake (3.27.7) 00:04:09.574 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:09.574 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:09.574 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:09.574 Build targets in project: 32 00:04:09.574 00:04:09.574 xnvme 0.7.3 00:04:09.574 00:04:09.574 User defined options 00:04:09.574 with-libaio : enabled 00:04:09.574 with-liburing: enabled 00:04:09.574 with-libvfn : disabled 00:04:09.574 with-spdk : false 00:04:09.574 00:04:09.574 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:09.832 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:10.090 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:10.090 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:10.090 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:10.090 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:10.090 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:10.090 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:10.090 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:10.090 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:10.090 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:10.090 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:10.090 
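The meson summary above shows which optional xnvme backends were detected on this host (libaio and liburing enabled, libvfn and the SPDK backend disabled), and the numbered [N/203] lines that follow are ninja compiling each backend twice, once for the shared libxnvme.so and once for the static libxnvme.a. A rough sketch of repeating the same dependency probes by hand, assuming gcc and pkg-config are available as in this environment:

pkg-config --exists liburing && echo "liburing found: $(pkg-config --modversion liburing)"
echo '#include <libaio.h>' | gcc -x c -E - >/dev/null 2>&1 && echo "libaio.h present"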
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:10.090 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:10.090 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:10.090 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:10.090 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:10.090 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:10.348 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:10.348 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:10.348 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:10.348 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:10.348 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:10.348 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:10.348 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:10.348 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:10.348 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:10.348 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:10.348 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:10.348 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:10.348 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:10.348 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:10.348 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:10.348 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:10.348 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:10.348 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:10.348 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:10.348 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:10.348 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:10.348 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:10.348 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:10.348 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:10.348 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:10.348 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:10.348 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:10.348 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:10.348 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:10.348 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:10.348 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:10.348 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:10.348 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:10.348 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:10.348 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:10.348 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:10.607 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:10.607 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:10.607 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:10.607 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:10.607 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:10.607 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:10.607 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:10.607 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:10.607 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:10.607 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:10.607 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:10.607 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:10.607 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:10.607 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:10.607 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:10.607 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:10.607 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:10.865 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:10.865 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:10.865 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:10.865 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:10.865 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:10.865 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:10.865 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:10.865 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:10.865 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:10.865 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:10.865 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:10.865 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:10.865 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:10.865 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:10.865 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:10.865 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:10.865 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:10.865 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:10.865 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:10.865 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:10.865 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:10.865 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:11.123 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:11.123 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:11.123 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:11.123 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:11.123 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:11.123 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:11.123 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:11.123 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:11.123 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:11.123 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:11.123 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:11.123 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:11.124 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:11.124 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:11.124 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:11.124 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:11.124 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:11.124 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:11.124 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:11.124 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:11.124 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:11.124 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:11.124 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:11.124 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:11.124 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:11.124 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:11.124 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:11.124 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:11.124 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:11.124 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:11.124 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:11.382 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:11.382 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:11.382 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:11.382 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:11.382 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:11.382 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:11.382 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:11.382 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:11.382 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:11.382 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:11.382 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:11.382 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:11.382 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:11.382 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:11.382 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:11.382 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:11.382 [139/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:11.382 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:11.382 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:11.639 [142/203] Compiling C object 
lib/libxnvme.so.p/xnvme_spec.c.o 00:04:11.639 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:11.639 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:11.639 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:11.639 [146/203] Linking target lib/libxnvme.so 00:04:11.639 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:11.639 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:11.639 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:11.639 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:11.639 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:11.639 [152/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:11.639 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:11.639 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:11.639 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:11.639 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:11.897 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:11.897 [158/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:11.897 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:11.897 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:11.897 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:11.897 [162/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:11.897 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:11.897 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:11.897 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:11.897 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:11.897 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:11.897 [168/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:11.897 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:11.897 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:11.897 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:12.156 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:12.156 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:12.156 [174/203] Linking static target lib/libxnvme.a 00:04:12.156 [175/203] Linking target tests/xnvme_tests_async_intf 00:04:12.156 [176/203] Linking target tests/xnvme_tests_buf 00:04:12.156 [177/203] Linking target tests/xnvme_tests_ioworker 00:04:12.156 [178/203] Linking target tests/xnvme_tests_cli 00:04:12.156 [179/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:12.156 [180/203] Linking target tests/xnvme_tests_enum 00:04:12.156 [181/203] Linking target tests/xnvme_tests_lblk 00:04:12.156 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:04:12.156 [183/203] Linking target tests/xnvme_tests_scc 00:04:12.156 [184/203] Linking target tests/xnvme_tests_znd_append 00:04:12.156 [185/203] Linking target tests/xnvme_tests_map 00:04:12.156 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:12.156 [187/203] Linking target tests/xnvme_tests_znd_state 00:04:12.156 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:12.156 [189/203] Linking target tools/lblk 00:04:12.156 [190/203] Linking target examples/xnvme_dev 00:04:12.156 [191/203] Linking 
target tests/xnvme_tests_kvs 00:04:12.156 [192/203] Linking target tools/xnvme 00:04:12.156 [193/203] Linking target tools/zoned 00:04:12.156 [194/203] Linking target tools/xdd 00:04:12.156 [195/203] Linking target tools/xnvme_file 00:04:12.156 [196/203] Linking target tools/kvs 00:04:12.156 [197/203] Linking target examples/xnvme_enum 00:04:12.156 [198/203] Linking target examples/xnvme_hello 00:04:12.156 [199/203] Linking target examples/xnvme_io_async 00:04:12.156 [200/203] Linking target examples/xnvme_single_sync 00:04:12.156 [201/203] Linking target examples/xnvme_single_async 00:04:12.156 [202/203] Linking target examples/zoned_io_sync 00:04:12.156 [203/203] Linking target examples/zoned_io_async 00:04:12.156 INFO: autodetecting backend as ninja 00:04:12.156 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:12.415 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:30.509 CC lib/log/log.o 00:04:30.509 CC lib/log/log_flags.o 00:04:30.509 CC lib/log/log_deprecated.o 00:04:30.509 CC lib/ut_mock/mock.o 00:04:30.509 CC lib/ut/ut.o 00:04:30.509 LIB libspdk_log.a 00:04:30.509 LIB libspdk_ut_mock.a 00:04:30.509 SO libspdk_log.so.7.0 00:04:30.509 LIB libspdk_ut.a 00:04:30.509 SO libspdk_ut_mock.so.6.0 00:04:30.509 SYMLINK libspdk_log.so 00:04:30.509 SO libspdk_ut.so.2.0 00:04:30.509 SYMLINK libspdk_ut_mock.so 00:04:30.509 SYMLINK libspdk_ut.so 00:04:30.509 CC lib/util/bit_array.o 00:04:30.509 CC lib/util/base64.o 00:04:30.509 CC lib/util/cpuset.o 00:04:30.509 CC lib/util/crc16.o 00:04:30.509 CC lib/dma/dma.o 00:04:30.509 CC lib/util/crc32.o 00:04:30.509 CC lib/util/crc32c.o 00:04:30.509 CC lib/ioat/ioat.o 00:04:30.509 CXX lib/trace_parser/trace.o 00:04:30.509 CC lib/util/crc32_ieee.o 00:04:30.509 CC lib/util/crc64.o 00:04:30.509 CC lib/vfio_user/host/vfio_user_pci.o 00:04:30.509 CC lib/util/dif.o 00:04:30.509 CC lib/util/fd.o 00:04:30.509 LIB libspdk_dma.a 00:04:30.509 CC lib/util/file.o 00:04:30.509 CC lib/util/hexlify.o 00:04:30.509 SO libspdk_dma.so.4.0 00:04:30.509 CC lib/util/iov.o 00:04:30.509 CC lib/util/math.o 00:04:30.509 SYMLINK libspdk_dma.so 00:04:30.509 CC lib/vfio_user/host/vfio_user.o 00:04:30.509 CC lib/util/pipe.o 00:04:30.509 LIB libspdk_ioat.a 00:04:30.509 CC lib/util/strerror_tls.o 00:04:30.509 SO libspdk_ioat.so.7.0 00:04:30.509 CC lib/util/string.o 00:04:30.509 CC lib/util/uuid.o 00:04:30.509 SYMLINK libspdk_ioat.so 00:04:30.509 CC lib/util/fd_group.o 00:04:30.509 CC lib/util/xor.o 00:04:30.509 CC lib/util/zipf.o 00:04:30.509 LIB libspdk_vfio_user.a 00:04:30.509 SO libspdk_vfio_user.so.5.0 00:04:30.509 SYMLINK libspdk_vfio_user.so 00:04:30.509 LIB libspdk_util.a 00:04:30.509 SO libspdk_util.so.9.0 00:04:30.509 LIB libspdk_trace_parser.a 00:04:30.509 SYMLINK libspdk_util.so 00:04:30.509 SO libspdk_trace_parser.so.5.0 00:04:30.509 SYMLINK libspdk_trace_parser.so 00:04:30.509 CC lib/rdma/rdma_verbs.o 00:04:30.509 CC lib/rdma/common.o 00:04:30.509 CC lib/idxd/idxd_user.o 00:04:30.509 CC lib/idxd/idxd.o 00:04:30.509 CC lib/idxd/idxd_kernel.o 00:04:30.509 CC lib/json/json_parse.o 00:04:30.509 CC lib/json/json_util.o 00:04:30.509 CC lib/env_dpdk/env.o 00:04:30.509 CC lib/conf/conf.o 00:04:30.509 CC lib/vmd/vmd.o 00:04:30.509 CC lib/vmd/led.o 00:04:30.509 CC lib/json/json_write.o 00:04:30.509 LIB libspdk_conf.a 00:04:30.509 CC lib/env_dpdk/memory.o 00:04:30.509 CC lib/env_dpdk/pci.o 00:04:30.509 CC lib/env_dpdk/init.o 00:04:30.509 SO libspdk_conf.so.6.0 00:04:30.509 LIB libspdk_rdma.a 00:04:30.509 CC 
lib/env_dpdk/threads.o 00:04:30.509 SYMLINK libspdk_conf.so 00:04:30.509 CC lib/env_dpdk/pci_ioat.o 00:04:30.509 SO libspdk_rdma.so.6.0 00:04:30.509 SYMLINK libspdk_rdma.so 00:04:30.509 CC lib/env_dpdk/pci_virtio.o 00:04:30.509 CC lib/env_dpdk/pci_vmd.o 00:04:30.509 CC lib/env_dpdk/pci_idxd.o 00:04:30.509 LIB libspdk_json.a 00:04:30.509 SO libspdk_json.so.6.0 00:04:30.509 CC lib/env_dpdk/pci_event.o 00:04:30.509 CC lib/env_dpdk/sigbus_handler.o 00:04:30.509 CC lib/env_dpdk/pci_dpdk.o 00:04:30.509 SYMLINK libspdk_json.so 00:04:30.509 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:30.509 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:30.509 LIB libspdk_idxd.a 00:04:30.509 SO libspdk_idxd.so.12.0 00:04:30.509 LIB libspdk_vmd.a 00:04:30.769 SYMLINK libspdk_idxd.so 00:04:30.769 SO libspdk_vmd.so.6.0 00:04:30.769 CC lib/jsonrpc/jsonrpc_server.o 00:04:30.769 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:30.769 CC lib/jsonrpc/jsonrpc_client.o 00:04:30.769 SYMLINK libspdk_vmd.so 00:04:30.769 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.029 LIB libspdk_jsonrpc.a 00:04:31.029 SO libspdk_jsonrpc.so.6.0 00:04:31.029 SYMLINK libspdk_jsonrpc.so 00:04:31.287 LIB libspdk_env_dpdk.a 00:04:31.545 SO libspdk_env_dpdk.so.14.0 00:04:31.545 CC lib/rpc/rpc.o 00:04:31.545 SYMLINK libspdk_env_dpdk.so 00:04:31.804 LIB libspdk_rpc.a 00:04:31.804 SO libspdk_rpc.so.6.0 00:04:31.804 SYMLINK libspdk_rpc.so 00:04:32.381 CC lib/keyring/keyring.o 00:04:32.381 CC lib/keyring/keyring_rpc.o 00:04:32.381 CC lib/trace/trace.o 00:04:32.381 CC lib/trace/trace_flags.o 00:04:32.381 CC lib/notify/notify.o 00:04:32.381 CC lib/notify/notify_rpc.o 00:04:32.381 CC lib/trace/trace_rpc.o 00:04:32.381 LIB libspdk_notify.a 00:04:32.640 SO libspdk_notify.so.6.0 00:04:32.640 LIB libspdk_keyring.a 00:04:32.640 LIB libspdk_trace.a 00:04:32.640 SO libspdk_keyring.so.1.0 00:04:32.640 SYMLINK libspdk_notify.so 00:04:32.640 SO libspdk_trace.so.10.0 00:04:32.640 SYMLINK libspdk_keyring.so 00:04:32.640 SYMLINK libspdk_trace.so 00:04:33.207 CC lib/sock/sock.o 00:04:33.207 CC lib/sock/sock_rpc.o 00:04:33.207 CC lib/thread/thread.o 00:04:33.207 CC lib/thread/iobuf.o 00:04:33.466 LIB libspdk_sock.a 00:04:33.466 SO libspdk_sock.so.9.0 00:04:33.723 SYMLINK libspdk_sock.so 00:04:33.981 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:33.981 CC lib/nvme/nvme_ctrlr.o 00:04:33.981 CC lib/nvme/nvme_fabric.o 00:04:33.981 CC lib/nvme/nvme_ns_cmd.o 00:04:33.981 CC lib/nvme/nvme_ns.o 00:04:33.981 CC lib/nvme/nvme_pcie_common.o 00:04:33.981 CC lib/nvme/nvme_pcie.o 00:04:33.981 CC lib/nvme/nvme.o 00:04:33.981 CC lib/nvme/nvme_qpair.o 00:04:34.547 LIB libspdk_thread.a 00:04:34.547 CC lib/nvme/nvme_quirks.o 00:04:34.547 CC lib/nvme/nvme_transport.o 00:04:34.547 SO libspdk_thread.so.10.0 00:04:34.805 CC lib/nvme/nvme_discovery.o 00:04:34.805 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:34.805 SYMLINK libspdk_thread.so 00:04:34.805 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:34.805 CC lib/nvme/nvme_tcp.o 00:04:34.805 CC lib/nvme/nvme_opal.o 00:04:34.805 CC lib/nvme/nvme_io_msg.o 00:04:35.063 CC lib/nvme/nvme_poll_group.o 00:04:35.063 CC lib/nvme/nvme_zns.o 00:04:35.063 CC lib/nvme/nvme_stubs.o 00:04:35.322 CC lib/nvme/nvme_auth.o 00:04:35.322 CC lib/nvme/nvme_cuse.o 00:04:35.322 CC lib/accel/accel.o 00:04:35.322 CC lib/accel/accel_rpc.o 00:04:35.322 CC lib/accel/accel_sw.o 00:04:35.581 CC lib/nvme/nvme_rdma.o 00:04:35.581 CC lib/blob/blobstore.o 00:04:35.581 CC lib/blob/request.o 00:04:35.848 CC lib/init/json_config.o 00:04:35.848 CC lib/virtio/virtio.o 00:04:35.848 CC lib/init/subsystem.o 00:04:36.107 CC 
lib/init/subsystem_rpc.o 00:04:36.107 CC lib/init/rpc.o 00:04:36.107 CC lib/virtio/virtio_vhost_user.o 00:04:36.107 CC lib/virtio/virtio_vfio_user.o 00:04:36.107 CC lib/virtio/virtio_pci.o 00:04:36.107 CC lib/blob/zeroes.o 00:04:36.107 CC lib/blob/blob_bs_dev.o 00:04:36.107 LIB libspdk_init.a 00:04:36.366 SO libspdk_init.so.5.0 00:04:36.366 LIB libspdk_accel.a 00:04:36.366 SYMLINK libspdk_init.so 00:04:36.366 SO libspdk_accel.so.15.0 00:04:36.366 LIB libspdk_virtio.a 00:04:36.366 SO libspdk_virtio.so.7.0 00:04:36.624 SYMLINK libspdk_accel.so 00:04:36.624 SYMLINK libspdk_virtio.so 00:04:36.624 CC lib/event/app.o 00:04:36.624 CC lib/event/log_rpc.o 00:04:36.624 CC lib/event/reactor.o 00:04:36.624 CC lib/event/app_rpc.o 00:04:36.624 CC lib/event/scheduler_static.o 00:04:36.883 LIB libspdk_nvme.a 00:04:36.883 CC lib/bdev/bdev.o 00:04:36.883 CC lib/bdev/bdev_rpc.o 00:04:36.883 CC lib/bdev/part.o 00:04:36.883 CC lib/bdev/bdev_zone.o 00:04:36.883 CC lib/bdev/scsi_nvme.o 00:04:37.141 SO libspdk_nvme.so.13.0 00:04:37.141 LIB libspdk_event.a 00:04:37.417 SO libspdk_event.so.13.0 00:04:37.417 SYMLINK libspdk_nvme.so 00:04:37.417 SYMLINK libspdk_event.so 00:04:39.403 LIB libspdk_blob.a 00:04:39.403 SO libspdk_blob.so.11.0 00:04:39.403 SYMLINK libspdk_blob.so 00:04:39.662 LIB libspdk_bdev.a 00:04:39.662 CC lib/lvol/lvol.o 00:04:39.662 CC lib/blobfs/blobfs.o 00:04:39.662 CC lib/blobfs/tree.o 00:04:39.662 SO libspdk_bdev.so.15.0 00:04:39.920 SYMLINK libspdk_bdev.so 00:04:39.920 CC lib/scsi/dev.o 00:04:39.920 CC lib/scsi/port.o 00:04:39.920 CC lib/scsi/scsi.o 00:04:39.920 CC lib/scsi/lun.o 00:04:39.920 CC lib/nbd/nbd.o 00:04:39.920 CC lib/nvmf/ctrlr.o 00:04:39.920 CC lib/ftl/ftl_core.o 00:04:39.920 CC lib/ublk/ublk.o 00:04:40.179 CC lib/ublk/ublk_rpc.o 00:04:40.179 CC lib/ftl/ftl_init.o 00:04:40.179 CC lib/ftl/ftl_layout.o 00:04:40.179 CC lib/scsi/scsi_bdev.o 00:04:40.438 CC lib/scsi/scsi_pr.o 00:04:40.438 CC lib/scsi/scsi_rpc.o 00:04:40.438 CC lib/nbd/nbd_rpc.o 00:04:40.438 CC lib/scsi/task.o 00:04:40.438 CC lib/nvmf/ctrlr_discovery.o 00:04:40.438 LIB libspdk_blobfs.a 00:04:40.438 CC lib/ftl/ftl_debug.o 00:04:40.438 SO libspdk_blobfs.so.10.0 00:04:40.438 LIB libspdk_nbd.a 00:04:40.696 SO libspdk_nbd.so.7.0 00:04:40.696 CC lib/nvmf/ctrlr_bdev.o 00:04:40.696 SYMLINK libspdk_blobfs.so 00:04:40.696 CC lib/ftl/ftl_io.o 00:04:40.696 LIB libspdk_lvol.a 00:04:40.696 CC lib/nvmf/subsystem.o 00:04:40.697 LIB libspdk_ublk.a 00:04:40.697 SYMLINK libspdk_nbd.so 00:04:40.697 CC lib/ftl/ftl_sb.o 00:04:40.697 SO libspdk_lvol.so.10.0 00:04:40.697 SO libspdk_ublk.so.3.0 00:04:40.697 SYMLINK libspdk_lvol.so 00:04:40.697 CC lib/ftl/ftl_l2p.o 00:04:40.697 CC lib/nvmf/nvmf.o 00:04:40.697 SYMLINK libspdk_ublk.so 00:04:40.697 CC lib/ftl/ftl_l2p_flat.o 00:04:40.697 LIB libspdk_scsi.a 00:04:40.955 CC lib/ftl/ftl_nv_cache.o 00:04:40.955 CC lib/nvmf/nvmf_rpc.o 00:04:40.955 SO libspdk_scsi.so.9.0 00:04:40.955 CC lib/nvmf/transport.o 00:04:40.955 CC lib/ftl/ftl_band.o 00:04:40.955 CC lib/nvmf/tcp.o 00:04:40.955 SYMLINK libspdk_scsi.so 00:04:40.955 CC lib/nvmf/stubs.o 00:04:41.213 CC lib/nvmf/mdns_server.o 00:04:41.486 CC lib/iscsi/conn.o 00:04:41.486 CC lib/vhost/vhost.o 00:04:41.486 CC lib/iscsi/init_grp.o 00:04:41.745 CC lib/nvmf/rdma.o 00:04:41.745 CC lib/ftl/ftl_band_ops.o 00:04:41.745 CC lib/ftl/ftl_writer.o 00:04:41.745 CC lib/ftl/ftl_rq.o 00:04:41.745 CC lib/ftl/ftl_reloc.o 00:04:41.745 CC lib/ftl/ftl_l2p_cache.o 00:04:42.003 CC lib/nvmf/auth.o 00:04:42.003 CC lib/ftl/ftl_p2l.o 00:04:42.003 CC lib/vhost/vhost_rpc.o 
00:04:42.003 CC lib/iscsi/iscsi.o 00:04:42.003 CC lib/iscsi/md5.o 00:04:42.003 CC lib/iscsi/param.o 00:04:42.261 CC lib/ftl/mngt/ftl_mngt.o 00:04:42.261 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:42.261 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:42.261 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:42.519 CC lib/vhost/vhost_scsi.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:42.519 CC lib/iscsi/portal_grp.o 00:04:42.519 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:42.519 CC lib/vhost/vhost_blk.o 00:04:42.519 CC lib/vhost/rte_vhost_user.o 00:04:42.777 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:42.777 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:42.777 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.777 CC lib/iscsi/tgt_node.o 00:04:42.777 CC lib/iscsi/iscsi_subsystem.o 00:04:42.777 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:43.035 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:43.035 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:43.035 CC lib/ftl/utils/ftl_conf.o 00:04:43.293 CC lib/iscsi/iscsi_rpc.o 00:04:43.293 CC lib/ftl/utils/ftl_md.o 00:04:43.293 CC lib/ftl/utils/ftl_mempool.o 00:04:43.293 CC lib/ftl/utils/ftl_bitmap.o 00:04:43.293 CC lib/ftl/utils/ftl_property.o 00:04:43.293 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:43.293 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:43.293 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:43.551 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:43.551 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:43.551 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:43.551 LIB libspdk_vhost.a 00:04:43.551 CC lib/iscsi/task.o 00:04:43.551 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:43.551 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:43.551 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:43.551 SO libspdk_vhost.so.8.0 00:04:43.551 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:43.551 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:43.551 CC lib/ftl/base/ftl_base_dev.o 00:04:43.809 CC lib/ftl/base/ftl_base_bdev.o 00:04:43.809 SYMLINK libspdk_vhost.so 00:04:43.809 LIB libspdk_iscsi.a 00:04:43.809 CC lib/ftl/ftl_trace.o 00:04:43.809 SO libspdk_iscsi.so.8.0 00:04:44.067 LIB libspdk_nvmf.a 00:04:44.067 LIB libspdk_ftl.a 00:04:44.067 SYMLINK libspdk_iscsi.so 00:04:44.067 SO libspdk_nvmf.so.18.0 00:04:44.325 SO libspdk_ftl.so.9.0 00:04:44.325 SYMLINK libspdk_nvmf.so 00:04:44.583 SYMLINK libspdk_ftl.so 00:04:45.150 CC module/env_dpdk/env_dpdk_rpc.o 00:04:45.150 CC module/accel/ioat/accel_ioat.o 00:04:45.150 CC module/accel/dsa/accel_dsa.o 00:04:45.150 CC module/accel/iaa/accel_iaa.o 00:04:45.150 CC module/blob/bdev/blob_bdev.o 00:04:45.150 CC module/accel/error/accel_error.o 00:04:45.150 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:45.150 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:45.150 CC module/sock/posix/posix.o 00:04:45.150 CC module/keyring/file/keyring.o 00:04:45.150 LIB libspdk_env_dpdk_rpc.a 00:04:45.150 SO libspdk_env_dpdk_rpc.so.6.0 00:04:45.409 SYMLINK libspdk_env_dpdk_rpc.so 00:04:45.409 CC module/accel/ioat/accel_ioat_rpc.o 00:04:45.409 LIB libspdk_scheduler_dpdk_governor.a 00:04:45.409 CC module/keyring/file/keyring_rpc.o 00:04:45.409 CC module/accel/error/accel_error_rpc.o 00:04:45.409 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:45.409 LIB libspdk_scheduler_dynamic.a 00:04:45.409 CC module/accel/iaa/accel_iaa_rpc.o 00:04:45.409 SO libspdk_scheduler_dynamic.so.4.0 00:04:45.409 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:45.409 LIB libspdk_accel_ioat.a 00:04:45.409 CC module/accel/dsa/accel_dsa_rpc.o 00:04:45.409 LIB libspdk_blob_bdev.a 00:04:45.409 SYMLINK 
libspdk_scheduler_dynamic.so 00:04:45.410 LIB libspdk_keyring_file.a 00:04:45.410 SO libspdk_accel_ioat.so.6.0 00:04:45.410 SO libspdk_blob_bdev.so.11.0 00:04:45.410 LIB libspdk_accel_error.a 00:04:45.410 SO libspdk_keyring_file.so.1.0 00:04:45.410 LIB libspdk_accel_iaa.a 00:04:45.410 CC module/scheduler/gscheduler/gscheduler.o 00:04:45.410 SO libspdk_accel_error.so.2.0 00:04:45.410 SYMLINK libspdk_accel_ioat.so 00:04:45.410 SYMLINK libspdk_blob_bdev.so 00:04:45.410 SO libspdk_accel_iaa.so.3.0 00:04:45.671 SYMLINK libspdk_keyring_file.so 00:04:45.671 LIB libspdk_accel_dsa.a 00:04:45.671 SYMLINK libspdk_accel_error.so 00:04:45.671 SYMLINK libspdk_accel_iaa.so 00:04:45.671 SO libspdk_accel_dsa.so.5.0 00:04:45.671 CC module/keyring/linux/keyring.o 00:04:45.671 CC module/keyring/linux/keyring_rpc.o 00:04:45.671 LIB libspdk_scheduler_gscheduler.a 00:04:45.671 SYMLINK libspdk_accel_dsa.so 00:04:45.671 SO libspdk_scheduler_gscheduler.so.4.0 00:04:45.671 LIB libspdk_keyring_linux.a 00:04:45.671 SYMLINK libspdk_scheduler_gscheduler.so 00:04:45.671 SO libspdk_keyring_linux.so.1.0 00:04:45.929 CC module/blobfs/bdev/blobfs_bdev.o 00:04:45.930 CC module/bdev/lvol/vbdev_lvol.o 00:04:45.930 CC module/bdev/error/vbdev_error.o 00:04:45.930 CC module/bdev/delay/vbdev_delay.o 00:04:45.930 CC module/bdev/gpt/gpt.o 00:04:45.930 CC module/bdev/malloc/bdev_malloc.o 00:04:45.930 SYMLINK libspdk_keyring_linux.so 00:04:45.930 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.930 CC module/bdev/null/bdev_null.o 00:04:45.930 CC module/bdev/nvme/bdev_nvme.o 00:04:45.930 LIB libspdk_sock_posix.a 00:04:45.930 SO libspdk_sock_posix.so.6.0 00:04:45.930 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:45.930 CC module/bdev/gpt/vbdev_gpt.o 00:04:45.930 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.930 SYMLINK libspdk_sock_posix.so 00:04:46.188 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:46.188 CC module/bdev/error/vbdev_error_rpc.o 00:04:46.188 LIB libspdk_blobfs_bdev.a 00:04:46.188 CC module/bdev/null/bdev_null_rpc.o 00:04:46.188 SO libspdk_blobfs_bdev.so.6.0 00:04:46.188 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:46.188 LIB libspdk_bdev_malloc.a 00:04:46.188 LIB libspdk_bdev_gpt.a 00:04:46.188 SYMLINK libspdk_blobfs_bdev.so 00:04:46.188 LIB libspdk_bdev_error.a 00:04:46.188 SO libspdk_bdev_malloc.so.6.0 00:04:46.188 SO libspdk_bdev_gpt.so.6.0 00:04:46.188 SO libspdk_bdev_error.so.6.0 00:04:46.447 LIB libspdk_bdev_null.a 00:04:46.447 SYMLINK libspdk_bdev_malloc.so 00:04:46.447 SYMLINK libspdk_bdev_gpt.so 00:04:46.447 LIB libspdk_bdev_delay.a 00:04:46.447 SYMLINK libspdk_bdev_error.so 00:04:46.447 CC module/bdev/nvme/nvme_rpc.o 00:04:46.447 SO libspdk_bdev_null.so.6.0 00:04:46.447 SO libspdk_bdev_delay.so.6.0 00:04:46.447 CC module/bdev/passthru/vbdev_passthru.o 00:04:46.447 SYMLINK libspdk_bdev_null.so 00:04:46.447 CC module/bdev/nvme/bdev_mdns_client.o 00:04:46.447 LIB libspdk_bdev_lvol.a 00:04:46.447 SYMLINK libspdk_bdev_delay.so 00:04:46.447 CC module/bdev/nvme/vbdev_opal.o 00:04:46.447 SO libspdk_bdev_lvol.so.6.0 00:04:46.447 CC module/bdev/split/vbdev_split.o 00:04:46.447 CC module/bdev/raid/bdev_raid.o 00:04:46.447 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:46.706 SYMLINK libspdk_bdev_lvol.so 00:04:46.706 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:46.706 CC module/bdev/raid/bdev_raid_rpc.o 00:04:46.706 CC module/bdev/split/vbdev_split_rpc.o 00:04:46.706 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.706 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.706 LIB libspdk_bdev_passthru.a 00:04:46.706 LIB 
libspdk_bdev_split.a 00:04:46.706 SO libspdk_bdev_passthru.so.6.0 00:04:46.965 SO libspdk_bdev_split.so.6.0 00:04:46.965 CC module/bdev/raid/bdev_raid_sb.o 00:04:46.965 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:46.965 SYMLINK libspdk_bdev_passthru.so 00:04:46.965 SYMLINK libspdk_bdev_split.so 00:04:46.965 CC module/bdev/raid/raid0.o 00:04:46.965 CC module/bdev/xnvme/bdev_xnvme.o 00:04:46.965 CC module/bdev/aio/bdev_aio.o 00:04:46.965 CC module/bdev/ftl/bdev_ftl.o 00:04:46.965 LIB libspdk_bdev_zone_block.a 00:04:46.965 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.965 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:47.224 SO libspdk_bdev_zone_block.so.6.0 00:04:47.224 CC module/bdev/raid/raid1.o 00:04:47.224 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:47.224 SYMLINK libspdk_bdev_zone_block.so 00:04:47.224 CC module/bdev/aio/bdev_aio_rpc.o 00:04:47.224 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:47.224 CC module/bdev/raid/concat.o 00:04:47.224 LIB libspdk_bdev_aio.a 00:04:47.224 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:47.224 LIB libspdk_bdev_xnvme.a 00:04:47.483 SO libspdk_bdev_aio.so.6.0 00:04:47.483 SO libspdk_bdev_xnvme.so.3.0 00:04:47.483 LIB libspdk_bdev_ftl.a 00:04:47.483 SO libspdk_bdev_ftl.so.6.0 00:04:47.483 SYMLINK libspdk_bdev_aio.so 00:04:47.483 SYMLINK libspdk_bdev_xnvme.so 00:04:47.483 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:47.483 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:47.483 LIB libspdk_bdev_iscsi.a 00:04:47.483 SYMLINK libspdk_bdev_ftl.so 00:04:47.483 SO libspdk_bdev_iscsi.so.6.0 00:04:47.483 LIB libspdk_bdev_raid.a 00:04:47.744 SYMLINK libspdk_bdev_iscsi.so 00:04:47.745 SO libspdk_bdev_raid.so.6.0 00:04:47.745 SYMLINK libspdk_bdev_raid.so 00:04:47.745 LIB libspdk_bdev_virtio.a 00:04:47.745 SO libspdk_bdev_virtio.so.6.0 00:04:48.053 SYMLINK libspdk_bdev_virtio.so 00:04:48.312 LIB libspdk_bdev_nvme.a 00:04:48.312 SO libspdk_bdev_nvme.so.7.0 00:04:48.570 SYMLINK libspdk_bdev_nvme.so 00:04:49.137 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:49.137 CC module/event/subsystems/vmd/vmd.o 00:04:49.137 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:49.137 CC module/event/subsystems/sock/sock.o 00:04:49.137 CC module/event/subsystems/scheduler/scheduler.o 00:04:49.137 CC module/event/subsystems/keyring/keyring.o 00:04:49.137 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:49.137 CC module/event/subsystems/iobuf/iobuf.o 00:04:49.396 LIB libspdk_event_vhost_blk.a 00:04:49.396 LIB libspdk_event_keyring.a 00:04:49.396 LIB libspdk_event_scheduler.a 00:04:49.396 LIB libspdk_event_vmd.a 00:04:49.396 LIB libspdk_event_sock.a 00:04:49.396 SO libspdk_event_vhost_blk.so.3.0 00:04:49.396 SO libspdk_event_keyring.so.1.0 00:04:49.396 LIB libspdk_event_iobuf.a 00:04:49.396 SO libspdk_event_sock.so.5.0 00:04:49.396 SO libspdk_event_scheduler.so.4.0 00:04:49.396 SO libspdk_event_vmd.so.6.0 00:04:49.396 SO libspdk_event_iobuf.so.3.0 00:04:49.396 SYMLINK libspdk_event_vhost_blk.so 00:04:49.396 SYMLINK libspdk_event_sock.so 00:04:49.396 SYMLINK libspdk_event_scheduler.so 00:04:49.396 SYMLINK libspdk_event_keyring.so 00:04:49.396 SYMLINK libspdk_event_vmd.so 00:04:49.396 SYMLINK libspdk_event_iobuf.so 00:04:49.962 CC module/event/subsystems/accel/accel.o 00:04:49.962 LIB libspdk_event_accel.a 00:04:49.962 SO libspdk_event_accel.so.6.0 00:04:50.220 SYMLINK libspdk_event_accel.so 00:04:50.477 CC module/event/subsystems/bdev/bdev.o 00:04:50.735 LIB libspdk_event_bdev.a 00:04:50.735 SO libspdk_event_bdev.so.6.0 00:04:50.735 SYMLINK libspdk_event_bdev.so 00:04:51.300 CC 
module/event/subsystems/scsi/scsi.o 00:04:51.300 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:51.300 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:51.300 CC module/event/subsystems/ublk/ublk.o 00:04:51.300 CC module/event/subsystems/nbd/nbd.o 00:04:51.300 LIB libspdk_event_scsi.a 00:04:51.300 LIB libspdk_event_nbd.a 00:04:51.300 LIB libspdk_event_ublk.a 00:04:51.300 SO libspdk_event_scsi.so.6.0 00:04:51.557 SO libspdk_event_ublk.so.3.0 00:04:51.557 SO libspdk_event_nbd.so.6.0 00:04:51.557 SYMLINK libspdk_event_scsi.so 00:04:51.557 LIB libspdk_event_nvmf.a 00:04:51.557 SYMLINK libspdk_event_nbd.so 00:04:51.557 SYMLINK libspdk_event_ublk.so 00:04:51.557 SO libspdk_event_nvmf.so.6.0 00:04:51.557 SYMLINK libspdk_event_nvmf.so 00:04:51.814 CC module/event/subsystems/iscsi/iscsi.o 00:04:51.814 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.087 LIB libspdk_event_vhost_scsi.a 00:04:52.087 LIB libspdk_event_iscsi.a 00:04:52.087 SO libspdk_event_vhost_scsi.so.3.0 00:04:52.087 SO libspdk_event_iscsi.so.6.0 00:04:52.087 SYMLINK libspdk_event_vhost_scsi.so 00:04:52.087 SYMLINK libspdk_event_iscsi.so 00:04:52.360 SO libspdk.so.6.0 00:04:52.360 SYMLINK libspdk.so 00:04:52.619 CXX app/trace/trace.o 00:04:52.619 CC app/trace_record/trace_record.o 00:04:52.619 CC app/nvmf_tgt/nvmf_main.o 00:04:52.619 CC app/iscsi_tgt/iscsi_tgt.o 00:04:52.619 CC app/spdk_tgt/spdk_tgt.o 00:04:52.877 CC examples/accel/perf/accel_perf.o 00:04:52.877 CC test/bdev/bdevio/bdevio.o 00:04:52.877 CC test/accel/dif/dif.o 00:04:52.877 CC test/app/bdev_svc/bdev_svc.o 00:04:52.877 CC test/blobfs/mkfs/mkfs.o 00:04:52.877 LINK nvmf_tgt 00:04:52.877 LINK iscsi_tgt 00:04:52.877 LINK spdk_trace_record 00:04:52.877 LINK spdk_tgt 00:04:52.877 LINK bdev_svc 00:04:52.877 LINK mkfs 00:04:53.136 LINK spdk_trace 00:04:53.136 LINK bdevio 00:04:53.136 CC test/app/histogram_perf/histogram_perf.o 00:04:53.136 CC test/app/jsoncat/jsoncat.o 00:04:53.395 LINK dif 00:04:53.395 LINK accel_perf 00:04:53.395 CC test/app/stub/stub.o 00:04:53.395 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:53.395 TEST_HEADER include/spdk/accel.h 00:04:53.395 TEST_HEADER include/spdk/accel_module.h 00:04:53.395 TEST_HEADER include/spdk/assert.h 00:04:53.395 TEST_HEADER include/spdk/barrier.h 00:04:53.395 TEST_HEADER include/spdk/base64.h 00:04:53.395 TEST_HEADER include/spdk/bdev.h 00:04:53.395 TEST_HEADER include/spdk/bdev_module.h 00:04:53.395 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.395 TEST_HEADER include/spdk/bit_array.h 00:04:53.395 TEST_HEADER include/spdk/bit_pool.h 00:04:53.395 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.395 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.395 TEST_HEADER include/spdk/blobfs.h 00:04:53.395 TEST_HEADER include/spdk/blob.h 00:04:53.395 TEST_HEADER include/spdk/conf.h 00:04:53.395 TEST_HEADER include/spdk/config.h 00:04:53.395 TEST_HEADER include/spdk/cpuset.h 00:04:53.395 TEST_HEADER include/spdk/crc16.h 00:04:53.395 CC app/spdk_lspci/spdk_lspci.o 00:04:53.395 TEST_HEADER include/spdk/crc32.h 00:04:53.395 TEST_HEADER include/spdk/crc64.h 00:04:53.395 TEST_HEADER include/spdk/dif.h 00:04:53.395 TEST_HEADER include/spdk/dma.h 00:04:53.395 TEST_HEADER include/spdk/endian.h 00:04:53.395 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.395 TEST_HEADER include/spdk/env.h 00:04:53.395 TEST_HEADER include/spdk/event.h 00:04:53.395 TEST_HEADER include/spdk/fd_group.h 00:04:53.395 LINK histogram_perf 00:04:53.395 TEST_HEADER include/spdk/fd.h 00:04:53.395 TEST_HEADER include/spdk/file.h 00:04:53.395 TEST_HEADER 
include/spdk/ftl.h 00:04:53.395 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.395 LINK jsoncat 00:04:53.395 TEST_HEADER include/spdk/hexlify.h 00:04:53.395 TEST_HEADER include/spdk/histogram_data.h 00:04:53.395 TEST_HEADER include/spdk/idxd.h 00:04:53.395 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.395 TEST_HEADER include/spdk/init.h 00:04:53.395 TEST_HEADER include/spdk/ioat.h 00:04:53.395 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.395 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.395 TEST_HEADER include/spdk/json.h 00:04:53.395 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.395 TEST_HEADER include/spdk/keyring.h 00:04:53.395 TEST_HEADER include/spdk/keyring_module.h 00:04:53.395 TEST_HEADER include/spdk/likely.h 00:04:53.395 TEST_HEADER include/spdk/log.h 00:04:53.395 TEST_HEADER include/spdk/lvol.h 00:04:53.395 TEST_HEADER include/spdk/memory.h 00:04:53.395 TEST_HEADER include/spdk/mmio.h 00:04:53.395 TEST_HEADER include/spdk/nbd.h 00:04:53.395 TEST_HEADER include/spdk/notify.h 00:04:53.395 CC test/dma/test_dma/test_dma.o 00:04:53.395 TEST_HEADER include/spdk/nvme.h 00:04:53.395 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.395 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.395 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.395 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.395 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.395 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.395 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.395 TEST_HEADER include/spdk/nvmf.h 00:04:53.395 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.653 LINK spdk_lspci 00:04:53.653 LINK stub 00:04:53.653 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.653 TEST_HEADER include/spdk/opal.h 00:04:53.653 TEST_HEADER include/spdk/opal_spec.h 00:04:53.653 TEST_HEADER include/spdk/pci_ids.h 00:04:53.653 TEST_HEADER include/spdk/pipe.h 00:04:53.653 TEST_HEADER include/spdk/queue.h 00:04:53.653 TEST_HEADER include/spdk/reduce.h 00:04:53.653 TEST_HEADER include/spdk/rpc.h 00:04:53.653 TEST_HEADER include/spdk/scheduler.h 00:04:53.653 TEST_HEADER include/spdk/scsi.h 00:04:53.653 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.653 TEST_HEADER include/spdk/sock.h 00:04:53.653 TEST_HEADER include/spdk/stdinc.h 00:04:53.653 TEST_HEADER include/spdk/string.h 00:04:53.653 TEST_HEADER include/spdk/thread.h 00:04:53.653 TEST_HEADER include/spdk/trace.h 00:04:53.653 TEST_HEADER include/spdk/trace_parser.h 00:04:53.653 TEST_HEADER include/spdk/tree.h 00:04:53.653 TEST_HEADER include/spdk/ublk.h 00:04:53.653 CC app/spdk_nvme_perf/perf.o 00:04:53.653 TEST_HEADER include/spdk/util.h 00:04:53.653 TEST_HEADER include/spdk/uuid.h 00:04:53.653 TEST_HEADER include/spdk/version.h 00:04:53.653 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.653 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.653 TEST_HEADER include/spdk/vhost.h 00:04:53.653 TEST_HEADER include/spdk/vmd.h 00:04:53.653 TEST_HEADER include/spdk/xor.h 00:04:53.653 TEST_HEADER include/spdk/zipf.h 00:04:53.653 CXX test/cpp_headers/accel.o 00:04:53.653 CC app/spdk_nvme_identify/identify.o 00:04:53.653 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:53.653 CC examples/bdev/hello_world/hello_bdev.o 00:04:53.653 LINK nvme_fuzz 00:04:53.911 CXX test/cpp_headers/accel_module.o 00:04:53.911 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:53.911 CC examples/blob/hello_world/hello_blob.o 00:04:53.911 LINK test_dma 00:04:53.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:53.911 CXX test/cpp_headers/assert.o 00:04:53.911 LINK 
hello_bdev 00:04:53.911 LINK spdk_nvme_discover 00:04:54.169 LINK hello_blob 00:04:54.169 CXX test/cpp_headers/barrier.o 00:04:54.169 CC test/env/mem_callbacks/mem_callbacks.o 00:04:54.169 CC test/env/vtophys/vtophys.o 00:04:54.169 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:54.169 CXX test/cpp_headers/base64.o 00:04:54.427 LINK vhost_fuzz 00:04:54.427 CC examples/bdev/bdevperf/bdevperf.o 00:04:54.427 LINK vtophys 00:04:54.427 LINK env_dpdk_post_init 00:04:54.427 CXX test/cpp_headers/bdev.o 00:04:54.427 CC examples/blob/cli/blobcli.o 00:04:54.427 LINK spdk_nvme_perf 00:04:54.686 CXX test/cpp_headers/bdev_module.o 00:04:54.686 LINK spdk_nvme_identify 00:04:54.686 CC test/event/event_perf/event_perf.o 00:04:54.686 LINK mem_callbacks 00:04:54.686 CC test/nvme/aer/aer.o 00:04:54.686 CXX test/cpp_headers/bdev_zone.o 00:04:54.944 CC test/lvol/esnap/esnap.o 00:04:54.944 CC examples/ioat/perf/perf.o 00:04:54.944 LINK event_perf 00:04:54.944 CC app/spdk_top/spdk_top.o 00:04:54.944 LINK blobcli 00:04:54.944 CC test/env/memory/memory_ut.o 00:04:54.944 CXX test/cpp_headers/bit_array.o 00:04:55.202 LINK aer 00:04:55.202 CC test/event/reactor/reactor.o 00:04:55.202 LINK ioat_perf 00:04:55.202 CXX test/cpp_headers/bit_pool.o 00:04:55.202 LINK bdevperf 00:04:55.202 CXX test/cpp_headers/blob_bdev.o 00:04:55.202 LINK reactor 00:04:55.460 CC test/nvme/reset/reset.o 00:04:55.460 CC examples/ioat/verify/verify.o 00:04:55.460 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.460 CC test/env/pci/pci_ut.o 00:04:55.460 CC test/event/reactor_perf/reactor_perf.o 00:04:55.460 CC app/vhost/vhost.o 00:04:55.718 CXX test/cpp_headers/blobfs.o 00:04:55.718 LINK iscsi_fuzz 00:04:55.718 LINK verify 00:04:55.718 LINK reset 00:04:55.718 LINK reactor_perf 00:04:55.718 LINK vhost 00:04:55.718 CXX test/cpp_headers/blob.o 00:04:55.718 LINK pci_ut 00:04:55.976 CC test/nvme/sgl/sgl.o 00:04:55.976 LINK spdk_top 00:04:55.976 CXX test/cpp_headers/conf.o 00:04:55.976 CC test/event/app_repeat/app_repeat.o 00:04:55.976 CC examples/nvme/hello_world/hello_world.o 00:04:55.976 CC test/nvme/e2edp/nvme_dp.o 00:04:55.976 CXX test/cpp_headers/config.o 00:04:55.976 LINK app_repeat 00:04:56.234 CC examples/sock/hello_world/hello_sock.o 00:04:56.234 LINK memory_ut 00:04:56.234 CXX test/cpp_headers/cpuset.o 00:04:56.234 LINK sgl 00:04:56.234 CC app/spdk_dd/spdk_dd.o 00:04:56.234 LINK hello_world 00:04:56.234 CC test/nvme/overhead/overhead.o 00:04:56.234 CXX test/cpp_headers/crc16.o 00:04:56.234 LINK nvme_dp 00:04:56.492 LINK hello_sock 00:04:56.492 CXX test/cpp_headers/crc32.o 00:04:56.492 CC test/event/scheduler/scheduler.o 00:04:56.492 CC test/nvme/err_injection/err_injection.o 00:04:56.492 CC examples/nvme/reconnect/reconnect.o 00:04:56.492 LINK overhead 00:04:56.493 CXX test/cpp_headers/crc64.o 00:04:56.493 CXX test/cpp_headers/dif.o 00:04:56.493 CC test/nvme/startup/startup.o 00:04:56.493 LINK spdk_dd 00:04:56.750 LINK err_injection 00:04:56.750 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:56.750 LINK scheduler 00:04:56.750 CXX test/cpp_headers/dma.o 00:04:56.750 LINK startup 00:04:56.750 CC examples/nvme/arbitration/arbitration.o 00:04:56.750 CC examples/nvme/hotplug/hotplug.o 00:04:56.750 CXX test/cpp_headers/endian.o 00:04:56.750 LINK reconnect 00:04:57.009 CXX test/cpp_headers/env_dpdk.o 00:04:57.009 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:57.009 CC test/nvme/reserve/reserve.o 00:04:57.009 CC app/fio/nvme/fio_plugin.o 00:04:57.009 LINK hotplug 00:04:57.009 CC examples/vmd/lsvmd/lsvmd.o 00:04:57.009 CC 
examples/nvme/abort/abort.o 00:04:57.009 LINK arbitration 00:04:57.266 CXX test/cpp_headers/env.o 00:04:57.266 LINK cmb_copy 00:04:57.266 LINK lsvmd 00:04:57.266 LINK nvme_manage 00:04:57.266 LINK reserve 00:04:57.266 CXX test/cpp_headers/event.o 00:04:57.266 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:57.524 CC test/rpc_client/rpc_client_test.o 00:04:57.524 CC examples/vmd/led/led.o 00:04:57.524 CC test/nvme/simple_copy/simple_copy.o 00:04:57.524 CXX test/cpp_headers/fd_group.o 00:04:57.524 LINK abort 00:04:57.524 LINK pmr_persistence 00:04:57.524 CC examples/nvmf/nvmf/nvmf.o 00:04:57.524 CC examples/util/zipf/zipf.o 00:04:57.524 LINK led 00:04:57.782 LINK spdk_nvme 00:04:57.782 LINK rpc_client_test 00:04:57.782 CXX test/cpp_headers/fd.o 00:04:57.782 LINK simple_copy 00:04:57.782 LINK zipf 00:04:57.782 CXX test/cpp_headers/file.o 00:04:57.782 CC test/nvme/connect_stress/connect_stress.o 00:04:57.782 CC test/nvme/boot_partition/boot_partition.o 00:04:57.782 CC app/fio/bdev/fio_plugin.o 00:04:58.040 CC test/nvme/compliance/nvme_compliance.o 00:04:58.040 LINK nvmf 00:04:58.040 CXX test/cpp_headers/ftl.o 00:04:58.040 LINK connect_stress 00:04:58.040 LINK boot_partition 00:04:58.040 CC examples/thread/thread/thread_ex.o 00:04:58.040 CC test/nvme/fused_ordering/fused_ordering.o 00:04:58.040 CXX test/cpp_headers/gpt_spec.o 00:04:58.298 CC test/thread/poller_perf/poller_perf.o 00:04:58.298 LINK nvme_compliance 00:04:58.298 LINK thread 00:04:58.298 CXX test/cpp_headers/hexlify.o 00:04:58.298 LINK fused_ordering 00:04:58.298 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:58.298 CC test/nvme/fdp/fdp.o 00:04:58.298 LINK poller_perf 00:04:58.298 CC examples/idxd/perf/perf.o 00:04:58.298 LINK spdk_bdev 00:04:58.556 CXX test/cpp_headers/histogram_data.o 00:04:58.556 CXX test/cpp_headers/idxd.o 00:04:58.556 CXX test/cpp_headers/idxd_spec.o 00:04:58.556 LINK doorbell_aers 00:04:58.556 CXX test/cpp_headers/init.o 00:04:58.556 CXX test/cpp_headers/ioat.o 00:04:58.556 CXX test/cpp_headers/ioat_spec.o 00:04:58.556 CXX test/cpp_headers/iscsi_spec.o 00:04:58.814 CXX test/cpp_headers/json.o 00:04:58.814 CXX test/cpp_headers/jsonrpc.o 00:04:58.814 CXX test/cpp_headers/keyring.o 00:04:58.814 LINK fdp 00:04:58.814 CXX test/cpp_headers/keyring_module.o 00:04:58.814 LINK idxd_perf 00:04:58.814 CC test/nvme/cuse/cuse.o 00:04:58.814 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:58.814 CXX test/cpp_headers/likely.o 00:04:58.814 CXX test/cpp_headers/log.o 00:04:58.814 CXX test/cpp_headers/lvol.o 00:04:58.814 CXX test/cpp_headers/memory.o 00:04:58.814 CXX test/cpp_headers/mmio.o 00:04:58.814 CXX test/cpp_headers/nbd.o 00:04:58.814 CXX test/cpp_headers/notify.o 00:04:58.814 LINK interrupt_tgt 00:04:58.814 CXX test/cpp_headers/nvme.o 00:04:59.076 CXX test/cpp_headers/nvme_intel.o 00:04:59.076 CXX test/cpp_headers/nvme_ocssd.o 00:04:59.076 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:59.076 CXX test/cpp_headers/nvme_spec.o 00:04:59.076 CXX test/cpp_headers/nvme_zns.o 00:04:59.076 CXX test/cpp_headers/nvmf_cmd.o 00:04:59.076 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:59.076 CXX test/cpp_headers/nvmf.o 00:04:59.076 CXX test/cpp_headers/nvmf_spec.o 00:04:59.076 CXX test/cpp_headers/nvmf_transport.o 00:04:59.076 CXX test/cpp_headers/opal.o 00:04:59.076 CXX test/cpp_headers/opal_spec.o 00:04:59.334 CXX test/cpp_headers/pci_ids.o 00:04:59.334 CXX test/cpp_headers/pipe.o 00:04:59.334 CXX test/cpp_headers/queue.o 00:04:59.334 CXX test/cpp_headers/reduce.o 00:04:59.334 CXX test/cpp_headers/rpc.o 00:04:59.334 CXX 
test/cpp_headers/scheduler.o 00:04:59.334 CXX test/cpp_headers/scsi.o 00:04:59.334 CXX test/cpp_headers/scsi_spec.o 00:04:59.334 CXX test/cpp_headers/sock.o 00:04:59.334 CXX test/cpp_headers/stdinc.o 00:04:59.334 CXX test/cpp_headers/string.o 00:04:59.334 CXX test/cpp_headers/thread.o 00:04:59.592 CXX test/cpp_headers/trace.o 00:04:59.592 CXX test/cpp_headers/trace_parser.o 00:04:59.592 CXX test/cpp_headers/tree.o 00:04:59.592 CXX test/cpp_headers/ublk.o 00:04:59.592 CXX test/cpp_headers/util.o 00:04:59.592 CXX test/cpp_headers/uuid.o 00:04:59.592 CXX test/cpp_headers/version.o 00:04:59.592 CXX test/cpp_headers/vfio_user_pci.o 00:04:59.592 CXX test/cpp_headers/vfio_user_spec.o 00:04:59.592 CXX test/cpp_headers/vhost.o 00:04:59.592 CXX test/cpp_headers/vmd.o 00:04:59.592 CXX test/cpp_headers/xor.o 00:04:59.592 CXX test/cpp_headers/zipf.o 00:05:00.157 LINK cuse 00:05:00.158 LINK esnap 00:05:00.725 00:05:00.725 real 0m53.653s 00:05:00.725 user 4m30.269s 00:05:00.725 sys 1m21.132s 00:05:00.725 01:14:45 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:05:00.725 01:14:45 make -- common/autotest_common.sh@10 -- $ set +x 00:05:00.725 ************************************ 00:05:00.725 END TEST make 00:05:00.725 ************************************ 00:05:00.725 01:14:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:00.725 01:14:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:00.725 01:14:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:00.725 01:14:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.725 01:14:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:00.725 01:14:46 -- pm/common@44 -- $ pid=5914 00:05:00.725 01:14:46 -- pm/common@50 -- $ kill -TERM 5914 00:05:00.725 01:14:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.725 01:14:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:00.725 01:14:46 -- pm/common@44 -- $ pid=5916 00:05:00.725 01:14:46 -- pm/common@50 -- $ kill -TERM 5916 00:05:00.983 01:14:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.983 01:14:46 -- nvmf/common.sh@7 -- # uname -s 00:05:00.983 01:14:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.983 01:14:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.983 01:14:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.983 01:14:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.983 01:14:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.983 01:14:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.983 01:14:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.983 01:14:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.983 01:14:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.983 01:14:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.983 01:14:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58c07e6b-7f02-4639-b8ee-ffc2403f8ec7 00:05:00.983 01:14:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=58c07e6b-7f02-4639-b8ee-ffc2403f8ec7 00:05:00.983 01:14:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.984 01:14:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.984 01:14:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.984 01:14:46 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.984 01:14:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.984 01:14:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.984 01:14:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.984 01:14:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.984 01:14:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.984 01:14:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.984 01:14:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.984 01:14:46 -- paths/export.sh@5 -- # export PATH 00:05:00.984 01:14:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.984 01:14:46 -- nvmf/common.sh@47 -- # : 0 00:05:00.984 01:14:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:00.984 01:14:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:00.984 01:14:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.984 01:14:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.984 01:14:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.984 01:14:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:00.984 01:14:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:00.984 01:14:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:00.984 01:14:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:00.984 01:14:46 -- spdk/autotest.sh@32 -- # uname -s 00:05:00.984 01:14:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:00.984 01:14:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:00.984 01:14:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:00.984 01:14:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:00.984 01:14:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:00.984 01:14:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:00.984 01:14:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:00.984 01:14:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:00.984 01:14:46 -- spdk/autotest.sh@48 -- # udevadm_pid=65894 00:05:00.984 01:14:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:00.984 01:14:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:00.984 01:14:46 -- pm/common@17 -- # local monitor 00:05:00.984 01:14:46 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:00.984 01:14:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.984 01:14:46 -- pm/common@25 -- # sleep 1 00:05:00.984 01:14:46 -- pm/common@21 -- # date +%s 00:05:00.984 01:14:46 -- pm/common@21 -- # date +%s 00:05:00.984 01:14:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721524486 00:05:00.984 01:14:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721524486 00:05:00.984 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721524486_collect-cpu-load.pm.log 00:05:00.984 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721524486_collect-vmstat.pm.log 00:05:02.361 01:14:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.361 01:14:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.361 01:14:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:02.361 01:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:02.361 01:14:47 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.361 01:14:47 -- common/autotest_common.sh@744 -- # xtrace_disable 00:05:02.361 01:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:02.361 01:14:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.361 01:14:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.361 01:14:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.361 01:14:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.361 01:14:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.361 01:14:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.361 01:14:47 -- common/autotest_common.sh@1451 -- # uname 00:05:02.361 01:14:47 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:05:02.361 01:14:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.361 01:14:47 -- common/autotest_common.sh@1471 -- # uname 00:05:02.361 01:14:47 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:05:02.361 01:14:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:02.361 01:14:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:02.361 01:14:47 -- spdk/autotest.sh@72 -- # hash lcov 00:05:02.361 01:14:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:02.361 01:14:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:02.361 --rc lcov_branch_coverage=1 00:05:02.361 --rc lcov_function_coverage=1 00:05:02.361 --rc genhtml_branch_coverage=1 00:05:02.361 --rc genhtml_function_coverage=1 00:05:02.361 --rc genhtml_legend=1 00:05:02.361 --rc geninfo_all_blocks=1 00:05:02.361 ' 00:05:02.361 01:14:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:02.361 --rc lcov_branch_coverage=1 00:05:02.361 --rc lcov_function_coverage=1 00:05:02.361 --rc genhtml_branch_coverage=1 00:05:02.361 --rc genhtml_function_coverage=1 00:05:02.361 --rc genhtml_legend=1 00:05:02.361 --rc geninfo_all_blocks=1 00:05:02.361 ' 00:05:02.361 01:14:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:02.361 --rc lcov_branch_coverage=1 00:05:02.361 --rc lcov_function_coverage=1 00:05:02.361 --rc genhtml_branch_coverage=1 00:05:02.361 --rc genhtml_function_coverage=1 00:05:02.361 --rc genhtml_legend=1 
00:05:02.361 --rc geninfo_all_blocks=1 00:05:02.361 --no-external' 00:05:02.361 01:14:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:02.361 --rc lcov_branch_coverage=1 00:05:02.361 --rc lcov_function_coverage=1 00:05:02.361 --rc genhtml_branch_coverage=1 00:05:02.361 --rc genhtml_function_coverage=1 00:05:02.361 --rc genhtml_legend=1 00:05:02.361 --rc geninfo_all_blocks=1 00:05:02.361 --no-external' 00:05:02.361 01:14:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:02.361 lcov: LCOV version 1.14 00:05:02.361 01:14:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:17.325 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:17.325 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:27.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:27.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 
00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:27.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:27.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:27.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:27.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:28.074 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:28.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:28.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:28.332 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:28.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:31.615 01:15:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:31.615 01:15:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.615 01:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.615 01:15:16 -- spdk/autotest.sh@91 -- # rm -f 00:05:31.615 01:15:16 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.183 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.752 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:32.752 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:32.752 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:32.752 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:32.752 01:15:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:32.752 01:15:17 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:32.752 01:15:17 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:32.752 01:15:17 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:32.752 01:15:17 -- common/autotest_common.sh@1668 -- # for 
nvme in /sys/block/nvme* 00:05:32.752 01:15:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:32.752 01:15:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:32.752 01:15:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:32.752 01:15:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:32.752 01:15:17 -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:32.752 01:15:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:32.752 01:15:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:32.752 01:15:17 -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:32.752 01:15:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:32.752 01:15:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:32.752 01:15:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:32.752 01:15:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:32.753 01:15:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:32.753 01:15:17 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:32.753 01:15:17 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:32.753 01:15:17 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:32.753 01:15:17 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:32.753 01:15:17 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:32.753 01:15:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:32.753 01:15:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.753 01:15:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:32.753 01:15:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:32.753 01:15:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:32.753 01:15:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:32.753 No valid GPT data, bailing 00:05:32.753 01:15:18 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.753 01:15:18 -- scripts/common.sh@391 -- # pt= 00:05:32.753 01:15:18 -- scripts/common.sh@392 -- # return 1 00:05:32.753 01:15:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:33.012 1+0 records in 00:05:33.012 1+0 records out 00:05:33.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176101 s, 59.5 MB/s 00:05:33.012 01:15:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.012 01:15:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.012 01:15:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:33.012 01:15:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:33.012 01:15:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:33.012 No valid GPT data, bailing 00:05:33.012 01:15:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:33.012 01:15:18 -- scripts/common.sh@391 -- # pt= 00:05:33.012 01:15:18 -- scripts/common.sh@392 -- # return 1 00:05:33.012 01:15:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:33.012 1+0 records in 00:05:33.012 1+0 records out 00:05:33.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683276 s, 153 MB/s 00:05:33.012 01:15:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.012 01:15:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.012 01:15:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:33.012 01:15:18 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:33.012 01:15:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:33.012 No valid GPT data, bailing 00:05:33.012 01:15:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:33.012 01:15:18 -- scripts/common.sh@391 -- # pt= 00:05:33.012 01:15:18 -- scripts/common.sh@392 -- # return 1 00:05:33.012 01:15:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:33.013 1+0 records in 00:05:33.013 1+0 records out 00:05:33.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00699257 s, 150 MB/s 00:05:33.013 01:15:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.013 01:15:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.013 01:15:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:33.013 01:15:18 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:33.013 01:15:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:33.013 No valid GPT data, bailing 00:05:33.013 01:15:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:33.272 01:15:18 -- scripts/common.sh@391 -- # pt= 00:05:33.272 01:15:18 -- scripts/common.sh@392 -- # return 1 00:05:33.272 01:15:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:33.272 1+0 records in 00:05:33.272 1+0 records out 00:05:33.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662657 s, 158 MB/s 00:05:33.272 01:15:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.272 01:15:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.272 01:15:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:33.272 01:15:18 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:33.272 01:15:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:33.272 No valid GPT data, bailing 00:05:33.272 
01:15:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:33.272 01:15:18 -- scripts/common.sh@391 -- # pt= 00:05:33.272 01:15:18 -- scripts/common.sh@392 -- # return 1 00:05:33.272 01:15:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:33.272 1+0 records in 00:05:33.272 1+0 records out 00:05:33.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513137 s, 204 MB/s 00:05:33.272 01:15:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:33.272 01:15:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:33.272 01:15:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:33.272 01:15:18 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:33.272 01:15:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:33.272 No valid GPT data, bailing 00:05:33.272 01:15:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:33.272 01:15:18 -- scripts/common.sh@391 -- # pt= 00:05:33.272 01:15:18 -- scripts/common.sh@392 -- # return 1 00:05:33.272 01:15:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:33.272 1+0 records in 00:05:33.272 1+0 records out 00:05:33.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432108 s, 243 MB/s 00:05:33.272 01:15:18 -- spdk/autotest.sh@118 -- # sync 00:05:33.531 01:15:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:33.531 01:15:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:33.531 01:15:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:36.820 01:15:21 -- spdk/autotest.sh@124 -- # uname -s 00:05:36.820 01:15:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:36.820 01:15:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:36.820 01:15:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.820 01:15:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.820 01:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:36.820 ************************************ 00:05:36.820 START TEST setup.sh 00:05:36.820 ************************************ 00:05:36.820 01:15:21 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:36.820 * Looking for test storage... 00:05:36.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:36.820 01:15:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:36.820 01:15:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:36.820 01:15:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:36.820 01:15:21 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.820 01:15:21 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.820 01:15:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:36.820 ************************************ 00:05:36.820 START TEST acl 00:05:36.820 ************************************ 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:36.820 * Looking for test storage... 
00:05:36.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:36.820 01:15:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
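What is being traced here is get_zoned_devs: for every NVMe block device the suite reads queue/zoned from sysfs, and a namespace only counts as zoned when that attribute exists and is not "none" (on this VM every device reports none, so nothing is collected). A condensed sketch of the same check; unlike the traced helper, this version only marks device names:

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    # is_block_zoned: "none" (or a missing attribute) means a regular, non-zoned namespace
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs["${nvme##*/}"]=1    # remember it so later steps can skip or special-case it
    fi
done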
00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:36.820 01:15:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:36.820 01:15:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:36.820 01:15:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:36.820 01:15:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:36.820 01:15:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:36.820 01:15:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:36.820 01:15:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:36.820 01:15:21 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.195 01:15:23 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:38.195 01:15:23 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:38.195 01:15:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:38.195 01:15:23 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:38.195 01:15:23 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.195 01:15:23 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:38.763 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:38.763 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:38.763 01:15:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.330 Hugepages 00:05:39.330 node hugesize free / total 00:05:39.330 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:39.330 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:39.330 01:15:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.330 00:05:39.330 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:39.330 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:39.330 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:39.330 01:15:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:39.589 01:15:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.848 01:15:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:39.848 01:15:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:39.848 01:15:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:39.848 01:15:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:39.848 01:15:24 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:39.848 01:15:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.848 01:15:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:39.848 01:15:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:39.848 01:15:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:39.848 01:15:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:39.849 01:15:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:39.849 01:15:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:40.108 01:15:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:40.108 01:15:25 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.108 01:15:25 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.108 01:15:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:40.108 ************************************ 00:05:40.108 START TEST denied 00:05:40.108 ************************************ 00:05:40.108 01:15:25 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:40.108 01:15:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:40.108 01:15:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:40.108 01:15:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.108 01:15:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:40.108 01:15:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.022 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.022 01:15:26 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:48.590 00:05:48.590 real 0m8.030s 00:05:48.590 user 0m0.959s 00:05:48.590 sys 0m2.170s 00:05:48.590 01:15:33 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.590 
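The denied test that just ran blocks one controller and then proves setup.sh left it alone: PCI_BLOCKED carries the controller's address, the config output has to contain the "Skipping denied controller" line, and the driver symlink under /sys/bus/pci/devices must still resolve to nvme. A condensed sketch of that verification, with the address and script path taken from the log:

bdf=0000:00:10.0
PCI_BLOCKED=" $bdf" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
    | grep "Skipping denied controller at $bdf"
# verify: the blocked controller must still be bound to the kernel nvme driver
driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
[[ ${driver##*/} == nvme ]] && echo "$bdf untouched, still on nvme"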
************************************ 00:05:48.590 END TEST denied 00:05:48.590 01:15:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:48.590 ************************************ 00:05:48.590 01:15:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:48.590 01:15:33 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.590 01:15:33 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.590 01:15:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:48.590 ************************************ 00:05:48.590 START TEST allowed 00:05:48.590 ************************************ 00:05:48.590 01:15:33 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:48.590 01:15:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:48.590 01:15:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:48.590 01:15:33 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:48.590 01:15:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.590 01:15:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.527 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:49.527 01:15:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:51.453 00:05:51.453 real 0m2.947s 00:05:51.453 user 0m1.143s 00:05:51.453 sys 0m1.809s 00:05:51.453 01:15:36 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.453 01:15:36 
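The allowed test is the mirror image: with only 0000:00:10.0 on PCI_ALLOWED, setup.sh config is expected to rebind exactly that controller (the "nvme -> uio_pci_generic" line grepped above), while the three controllers that were not listed must keep their kernel nvme binding. A condensed sketch of the same verification, addresses as shown in the trace:

bdf_allowed=0000:00:10.0
PCI_ALLOWED=$bdf_allowed /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
    | grep -E "$bdf_allowed .*: nvme -> .*"
# controllers that were not on the allowed list must remain on the kernel nvme driver
for bdf in 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ ${driver##*/} == nvme ]] || echo "unexpected rebind of $bdf"
done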
setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:51.453 ************************************ 00:05:51.453 END TEST allowed 00:05:51.453 ************************************ 00:05:51.453 00:05:51.453 real 0m14.700s 00:05:51.453 user 0m3.583s 00:05:51.453 sys 0m6.229s 00:05:51.453 01:15:36 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.453 01:15:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:51.453 ************************************ 00:05:51.453 END TEST acl 00:05:51.453 ************************************ 00:05:51.453 01:15:36 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:51.453 01:15:36 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.453 01:15:36 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.453 01:15:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:51.453 ************************************ 00:05:51.453 START TEST hugepages 00:05:51.453 ************************************ 00:05:51.453 01:15:36 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:51.453 * Looking for test storage... 00:05:51.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4406628 kB' 'MemAvailable: 7383932 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 450680 kB' 'Inactive: 2840072 kB' 'Active(anon): 118656 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840072 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 109832 kB' 'Mapped: 48684 kB' 'Shmem: 10516 kB' 'KReclaimable: 84296 kB' 'Slab: 166912 kB' 'SReclaimable: 84296 kB' 'SUnreclaim: 82616 kB' 
'KernelStack: 6556 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 343796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.453 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 
01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 
01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:51.454 
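The long field-by-field trace above is get_meminfo scanning /proc/meminfo until it reaches Hugepagesize, which comes back as 2048 (kB) on this VM; clear_hp then writes 0 into every per-node hugepage pool so the test starts from an empty slate. A condensed sketch of both helpers (the per-node meminfo variant seen in the trace is omitted for brevity):

get_meminfo() {                       # print the value of one /proc/meminfo field
    local field=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$field" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
default_hugepages=$(get_meminfo Hugepagesize)    # 2048 on this run
# clear_hp: zero every per-node hugepage pool before the test sets its own counts
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done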
01:15:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:51.454 01:15:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:51.454 01:15:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.454 01:15:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.454 01:15:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:51.454 ************************************ 00:05:51.454 START TEST default_setup 00:05:51.454 ************************************ 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.454 01:15:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:52.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.066 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.066 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6440156 kB' 'MemAvailable: 9417228 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462916 kB' 'Inactive: 2840096 kB' 'Active(anon): 130892 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121988 kB' 'Mapped: 48800 kB' 'Shmem: 10476 kB' 'KReclaimable: 83780 kB' 'Slab: 166188 kB' 'SReclaimable: 83780 kB' 'SUnreclaim: 82408 kB' 'KernelStack: 6576 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.066 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:53.067 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:53.068 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6439904 kB' 'MemAvailable: 9416976 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462544 kB' 'Inactive: 2840096 kB' 'Active(anon): 130520 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121640 kB' 'Mapped: 48676 kB' 'Shmem: 10476 kB' 'KReclaimable: 83780 kB' 'Slab: 166180 kB' 'SReclaimable: 83780 kB' 'SUnreclaim: 82400 kB' 'KernelStack: 6576 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
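The trace above shows the shape of the harness's meminfo lookup: the snapshot is read into an array (falling back to /proc/meminfo when no NUMA node is given, and stripping the "Node <N> " prefix when one is), then each 'Key: value' pair is walked with IFS=': ' until the requested key matches and its value is echoed. The following is a minimal standalone sketch of that pattern; the function name and structure are reconstructed from the trace, not copied from setup/common.sh.

# Sketch reconstructed from the trace above (not copied from setup/common.sh):
# return one field from /proc/meminfo, or from a per-NUMA-node meminfo file
# when a node number is supplied.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local line var val unit

    # Per-node counters live in sysfs and carry a "Node <N> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # drop the per-node prefix
        IFS=': ' read -r var val unit <<< "$line"      # e.g. HugePages_Surp / 0 / ""
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# With the snapshot printed above, "get_meminfo_sketch HugePages_Surp" would
# echo 0, which is exactly the value the harness records as surp=0 just below.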
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:53.335 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6439904 kB' 'MemAvailable: 9416976 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462328 kB' 'Inactive: 2840096 kB' 'Active(anon): 130304 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121412 kB' 'Mapped: 48676 kB' 'Shmem: 10476 kB' 'KReclaimable: 83780 kB' 'Slab: 166180 kB' 'SReclaimable: 83780 kB' 'SUnreclaim: 82400 kB' 'KernelStack: 6560 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
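The same field-by-field walk is repeated once per counter (AnonHugePages and HugePages_Surp above, HugePages_Rsvd here, HugePages_Total further down), re-reading the snapshot each time. If you only need the hugepage counters outside this harness, a single pass into an associative array is enough; this is an illustrative alternative, not what setup/common.sh does.

#!/usr/bin/env bash
# Illustrative one-pass alternative (not the harness's code): collect every
# /proc/meminfo field into an associative array instead of rescanning the
# file once per field.
declare -A meminfo
while IFS=': ' read -r key val _; do
    meminfo[$key]=$val
done < /proc/meminfo

# Against the snapshot logged above this prints: anon=0 surp=0 resv=0 total=1024
printf 'anon=%s surp=%s resv=%s total=%s\n' \
    "${meminfo[AnonHugePages]}" "${meminfo[HugePages_Surp]}" \
    "${meminfo[HugePages_Rsvd]}" "${meminfo[HugePages_Total]}"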
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:53.337 nr_hugepages=1024
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:53.337 resv_hugepages=0
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:53.337 surplus_hugepages=0
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:53.337 anon_hugepages=0
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:53.337 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:53.338 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6439904 kB' 'MemAvailable: 9416976 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462588 kB' 'Inactive: 2840096 kB' 'Active(anon): 130564 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121672 kB' 'Mapped: 48676 kB' 'Shmem: 10476 kB' 'KReclaimable: 83780 kB' 'Slab: 166180 kB' 'SReclaimable: 83780 kB' 'SUnreclaim: 82400 kB' 'KernelStack: 6560 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
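The arithmetic guards traced at hugepages.sh@107 and @109 are the real assertion of this default_setup pass: 1024 has to equal nr_hugepages + surp + resv, and then nr_hugepages on its own, which only holds because the surplus and reserved counters read back as 0. A simpler standalone check in the same spirit, written against /proc/meminfo with awk instead of the harness's get_meminfo (names and structure are mine, not the harness's):

# Rough standalone analogue of the traced assertion (variable names are mine;
# the harness derives the same numbers through get_meminfo as shown above).
requested=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# The pool has to report exactly the requested size, with nothing surplus
# or reserved, which is what the logged values (1024 / 0 / 0) satisfy.
(( total == requested ))     || echo "pool size != requested" >&2
(( surp == 0 && resv == 0 )) || echo "unexpected surplus/reserved hugepages" >&2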
setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.339 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:53.340 01:15:38 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6439652 kB' 'MemUsed: 5802320 kB' 'SwapCached: 0 kB' 'Active: 462544 kB' 'Inactive: 2840096 kB' 'Active(anon): 130520 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840096 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3182604 kB' 'Mapped: 48676 kB' 'AnonPages: 121648 kB' 'Shmem: 10476 kB' 'KernelStack: 6576 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83780 kB' 'Slab: 166180 kB' 'SReclaimable: 83780 kB' 'SUnreclaim: 82400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.340 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
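The long run of "[[ <field> == HugePages_Surp ]]" / "continue" entries above and below is setup/common.sh's get_meminfo scanning node0's meminfo file for a single field. In outline it behaves like the following sketch, reconstructed from the xtrace output; the function body, the sed-based prefix stripping, and the variable names beyond those visible in the trace are illustrative assumptions, not the verbatim upstream helper:

get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Prefer the per-node meminfo file when a node id is supplied,
    # e.g. /sys/devices/system/node/node0/meminfo as seen in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip it so the
    # field names match the plain /proc/meminfo format.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # one "continue" entry per skipped field
        echo "$val"                        # e.g. "echo 0" for HugePages_Surp here
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
}

Each skipped field produces one "[[ ... ]]" plus "continue" pair in the xtrace, which is why the log around this point is dominated by that pattern.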
00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.341 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:53.342 01:15:38 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:53.342 node0=1024 expecting 1024 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:53.342 00:05:53.342 real 0m1.850s 00:05:53.342 user 0m0.708s 00:05:53.342 sys 0m1.129s 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.342 01:15:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:53.342 ************************************ 00:05:53.342 END TEST default_setup 00:05:53.342 ************************************ 00:05:53.342 01:15:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:53.342 01:15:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.342 01:15:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.342 01:15:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:53.342 ************************************ 00:05:53.342 START TEST per_node_1G_alloc 00:05:53.342 ************************************ 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:53.342 01:15:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.342 01:15:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:53.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.172 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:54.172 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:54.172 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:54.172 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
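Before the AnonHugePages lookup that follows, it may help to spell out the arithmetic and the check this per_node_1G_alloc run is exercising: 1 GiB per node with the 2048 kB default hugepage size means 512 pages, and verify_nr_hugepages then confirms the kernel actually exposes that many. A minimal sketch of both steps follows; the helper names mirror the trace, while the standalone variables and the HugePages_Rsvd lookup are illustrative assumptions rather than the exact script:

# 1 GiB per node, expressed in kB as passed to get_test_nr_hugepages above.
size_kb=1048576
hugepage_kb=2048                            # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepage_kb ))   # 1048576 / 2048 = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"      # matches the values set before setup.sh ran

# The verification the trace is now walking through, in outline:
anon=$(get_meminfo AnonHugePages)           # 0 in this run
surp=$(get_meminfo HugePages_Surp)          # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)          # 0 in this run
total=$(get_meminfo HugePages_Total)        # 512 after setup.sh ran with NRHUGE=512
(( total == nr_hugepages + surp + resv ))   # the same hugepages.sh@110-style check seen earlier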
00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7496440 kB' 'MemAvailable: 10473516 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462396 kB' 'Inactive: 2840104 kB' 'Active(anon): 130372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121728 kB' 'Mapped: 48804 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166156 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82380 kB' 'KernelStack: 6552 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.172 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.172 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7496440 kB' 'MemAvailable: 10473516 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462304 kB' 'Inactive: 2840104 kB' 'Active(anon): 130280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121676 kB' 'Mapped: 48672 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166180 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82404 kB' 'KernelStack: 6576 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 
'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.173 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.174 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.437 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7497076 kB' 'MemAvailable: 10474152 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462564 kB' 'Inactive: 2840104 kB' 'Active(anon): 130540 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121680 kB' 'Mapped: 48672 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166180 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82404 kB' 'KernelStack: 6576 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.438 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.439 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:54.440 nr_hugepages=512 00:05:54.440 resv_hugepages=0 00:05:54.440 surplus_hugepages=0 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:54.440 anon_hugepages=0 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7497076 kB' 'MemAvailable: 10474152 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462364 kB' 'Inactive: 2840104 kB' 'Active(anon): 130340 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 
kB' 'Writeback: 0 kB' 'AnonPages: 121696 kB' 'Mapped: 48672 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166180 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82404 kB' 'KernelStack: 6576 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.440 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 
01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.441 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7505672 kB' 'MemUsed: 4736300 kB' 'SwapCached: 0 kB' 'Active: 462384 kB' 'Inactive: 2840104 kB' 'Active(anon): 130360 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 3182604 kB' 'Mapped: 48672 kB' 'AnonPages: 121524 kB' 'Shmem: 10476 kB' 'KernelStack: 6560 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 83776 kB' 'Slab: 166176 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.442 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:54.443 node0=512 expecting 512 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:54.443 00:05:54.443 real 0m1.036s 00:05:54.443 user 0m0.421s 00:05:54.443 sys 0m0.670s 00:05:54.443 ************************************ 00:05:54.443 END TEST per_node_1G_alloc 00:05:54.443 ************************************ 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.443 01:15:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 01:15:39 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:54.443 01:15:39 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.443 01:15:39 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.443 01:15:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 
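The trace above is setup/common.sh's get_meminfo helper walking every field of the node 0 meminfo file until it reaches the requested key (HugePages_Total, then HugePages_Surp), echoing 512 and 0 respectively. A minimal bash sketch of that loop, reconstructed from the traced commands (common.sh@17-33); the exact helper in setup/common.sh may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) patterns seen in the trace

    # get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo,
    # or from /sys/devices/system/node/node$NODE/meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other field, as in the trace
            echo "$val"                        # e.g. 512 for HugePages_Total, 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    # usage matching the traced calls:
    #   get_meminfo HugePages_Total 0   -> 512
    #   get_meminfo HugePages_Surp 0    -> 0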
************************************ 00:05:54.443 START TEST even_2G_alloc 00:05:54.443 ************************************ 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.443 01:15:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:55.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.272 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:55.272 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:55.272 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:55.272 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@91 -- # local sorted_s 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6451288 kB' 'MemAvailable: 9428364 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462808 kB' 'Inactive: 2840104 kB' 'Active(anon): 130784 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121936 kB' 'Mapped: 48744 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166144 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82368 kB' 'KernelStack: 6536 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55332 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 
01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.272 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6451288 kB' 'MemAvailable: 9428364 kB' 'Buffers: 2436 kB' 
'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462612 kB' 'Inactive: 2840104 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121740 kB' 'Mapped: 48616 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166168 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82392 kB' 'KernelStack: 6528 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55300 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.273 
01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.273 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6451288 kB' 'MemAvailable: 9428364 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462324 kB' 'Inactive: 2840104 kB' 'Active(anon): 130300 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121712 kB' 'Mapped: 48616 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166164 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82388 kB' 'KernelStack: 6512 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55316 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.274 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.536 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:55.537 nr_hugepages=1024 00:05:55.537 resv_hugepages=0 00:05:55.537 surplus_hugepages=0 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:55.537 anon_hugepages=0 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.537 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6460104 kB' 'MemAvailable: 9437180 kB' 'Buffers: 2436 kB' 'Cached: 3180168 kB' 'SwapCached: 0 kB' 'Active: 462932 kB' 'Inactive: 2840104 kB' 'Active(anon): 130908 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121876 kB' 'Mapped: 48936 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166188 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82412 kB' 'KernelStack: 6608 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55316 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 
'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
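The long runs of [[ ... ]] / continue pairs above are bash xtrace output from the setup/common.sh get_meminfo helper scanning every /proc/meminfo key until it reaches the one requested (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total, ...). A minimal standalone sketch of that lookup idea follows; get_meminfo_sketch is a hypothetical name, it is not the traced setup/common.sh implementation, and the per-node meminfo handling the real helper supports is deliberately left out.

#!/usr/bin/env bash
# Sketch only: look up a single key in /proc/meminfo and print its value.
get_meminfo_sketch() {
    local get=$1 var val _
    # "key: value [kB]" lines; IFS=': ' splits on both the colon and spaces,
    # which is the same read pattern visible in the trace above.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"            # e.g. 1024 for HugePages_Total
            return 0
        fi
    done < /proc/meminfo
    echo 0                         # sketch's fallback when the key is absent
}

# Usage: the same three values the trace reads before its consistency checks.
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
echo "surp=$surp resv=$resv total=$total"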
00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.538 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6459416 kB' 'MemUsed: 5782556 kB' 'SwapCached: 0 kB' 'Active: 462276 kB' 'Inactive: 2840108 kB' 'Active(anon): 130252 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3182608 kB' 'Mapped: 48676 kB' 'AnonPages: 121704 kB' 'Shmem: 10476 kB' 'KernelStack: 6576 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83776 kB' 'Slab: 166184 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.539 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.540 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
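The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above is the trace of setup/common.sh's get_meminfo walking every field of a meminfo file until it reaches the requested key: at this point it has returned 1024 for HugePages_Total system-wide and 0 for HugePages_Surp on node0, which is what lets even_2G_alloc keep its per-node tally at 1024 pages. The following is a minimal sketch of that lookup, assuming a simplified body; the function name get_meminfo_sketch and its internals are illustrative only, not the project's actual implementation (the real logic in setup/common.sh uses mapfile plus the IFS=': ' read loop visible in the trace).

#!/usr/bin/env bash
# Sketch: scan a meminfo file line by line and echo the value of one key,
# optionally for a single NUMA node (hypothetical helper, simplified for illustration).
get_meminfo_sketch() {
    local key=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node queries read the node-local meminfo exposed by sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}           # node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"                       # kB for sizes, a bare count for HugePages_* keys
            return 0
        fi
    done <"$mem_f"
    return 1
}
# Example (values as observed in this run):
#   get_meminfo_sketch HugePages_Total      -> 1024
#   get_meminfo_sketch HugePages_Surp 0     -> 0 for node0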
00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:55.541 node0=1024 expecting 1024 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:55.541 00:05:55.541 real 0m1.020s 00:05:55.541 user 0m0.441s 00:05:55.541 sys 0m0.637s 00:05:55.541 ************************************ 00:05:55.541 END TEST even_2G_alloc 00:05:55.541 ************************************ 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.541 01:15:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:55.541 01:15:40 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:55.541 01:15:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.541 01:15:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.541 01:15:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:55.541 ************************************ 00:05:55.541 START TEST odd_alloc 00:05:55.541 ************************************ 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:55.541 01:15:40 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.541 01:15:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.368 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.368 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.368 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.368 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.368 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6463084 kB' 'MemAvailable: 9440168 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 462564 kB' 'Inactive: 2840112 kB' 'Active(anon): 130540 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121892 kB' 'Mapped: 48808 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166176 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82400 kB' 'KernelStack: 6584 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 
'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55332 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.369 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6463084 kB' 'MemAvailable: 9440168 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 462460 kB' 'Inactive: 2840112 kB' 'Active(anon): 130436 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121676 kB' 'Mapped: 48676 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166188 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82412 kB' 'KernelStack: 6624 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.370 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
[setup/common.sh@32 xtrace repeats for each remaining /proc/meminfo field (SwapTotal through HugePages_Rsvd); none matches HugePages_Surp, so every iteration logs continue, IFS=': ' and read -r var val _]
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:56.635 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6463604 kB' 'MemAvailable: 9440688 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 462268 kB' 'Inactive: 2840112 kB' 'Active(anon): 130244 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121444 kB' 'Mapped: 48676 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166184 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82408 kB' 'KernelStack: 6576 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
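The xtrace above is setup/common.sh's get_meminfo at work: it snapshots the meminfo file with mapfile, strips any "Node <N>" prefix from the lines, then walks the fields one "key: value" pair at a time, logging a continue/IFS/read triplet for every non-matching key until the requested one is found and its value is echoed (HugePages_Surp just returned 0; HugePages_Rsvd is looked up next against the dump printed above). A compact sketch of that lookup pattern, assuming a hypothetical helper name meminfo_value rather than the actual common.sh source:

# Minimal sketch of the lookup exercised by the trace above (illustrative only,
# not the real get_meminfo): pick one field out of /proc/meminfo, or out of the
# per-node copy in sysfs when a node id is given.
meminfo_value() {
    local get=$1 node=${2:-}                 # e.g. HugePages_Rsvd, optional NUMA node id
    local mem_f=/proc/meminfo
    # With a node id, read the per-node copy exported by sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node lines carry a "Node <N> " prefix; strip it so both files parse alike.
        [[ $line == "Node "* ]] && line=${line#Node * }
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                      # value only, e.g. "0" or "1025"
            return 0
        fi
    done <"$mem_f"
    return 1                                 # requested field not present
}

# meminfo_value HugePages_Rsvd     -> prints 0 on this box (see the dump above)
# meminfo_value HugePages_Surp 0   -> reads /sys/devices/system/node/node0/meminfo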
[setup/common.sh@32 xtrace repeats for each /proc/meminfo field from MemTotal through HugePages_Free; none matches HugePages_Rsvd, so every iteration logs continue, IFS=': ' and read -r var val _]
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:56.637 nr_hugepages=1025
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:56.637 resv_hugepages=0
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:56.637 surplus_hugepages=0
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:56.637 anon_hugepages=0
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
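At this point hugepages.sh has surp=0, resv=0 and nr_hugepages=1025, and the @107/@109 checks assert that the odd allocation of 1025 pages is fully reflected in the kernel's counters; the dump above also shows the pool's footprint adding up, 1025 pages x Hugepagesize 2048 kB = Hugetlb 2099200 kB. Roughly, the bookkeeping being verified looks like the sketch below, reusing the meminfo_value helper sketched earlier; verify_odd_alloc is an illustrative wrapper, not the hugepages.sh function itself.

# Illustrative consistency check for the odd_alloc case: the requested page count
# must match what the kernel reports, with no surplus or reserved pages left over.
verify_odd_alloc() {
    local requested=1025                               # the odd page count this test configures
    local surp resv total
    surp=$(meminfo_value HugePages_Surp)               # 0 in the trace above
    resv=$(meminfo_value HugePages_Rsvd)               # 0 in the trace above
    total=$(meminfo_value HugePages_Total)             # 1025, read next in the trace
    # Same identities hugepages.sh asserts around @107-@110.
    (( requested == total )) &&
    (( total == requested + surp + resv ))
}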
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:56.637 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6463952 kB' 'MemAvailable: 9441036 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 462236 kB' 'Inactive: 2840112 kB' 'Active(anon): 130212 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121388 kB' 'Mapped: 48676 kB' 'Shmem: 10476 kB' 'KReclaimable: 83776 kB' 'Slab: 166176 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82400 kB' 'KernelStack: 6560 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@32 xtrace repeats for each /proc/meminfo field from MemTotal through Unaccepted; none matches HugePages_Total, so every iteration logs continue, IFS=': ' and read -r var val _]
00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
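With the system-wide total confirmed at 1025, the trace that follows repeats the same lookup per NUMA node: get_nodes enumerates /sys/devices/system/node/node*, and get_meminfo is then called with a node argument so it reads node0/meminfo instead of /proc/meminfo. A rough sketch of that per-node pass, reusing the meminfo_value helper sketched earlier; the function and variable names are illustrative, not the hugepages.sh source. On this single-node VM only node0 exists, so the whole 1025-page pool is expected there.

# Illustrative per-node walk over the sysfs NUMA nodes.
check_node_hugepages() {
    local node id surp total
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue                      # skip if the glob did not match
        id=${node##*node}                               # "0" on this host
        total=$(meminfo_value HugePages_Total "$id")    # 1025 on node0 in the trace below
        surp=$(meminfo_value HugePages_Surp "$id")      # 0 on node0 in the trace below
        echo "node$id: HugePages_Total=$total HugePages_Surp=$surp"
    done
}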
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:56.638 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6464016 kB' 'MemUsed: 5777956 kB' 'SwapCached: 0 kB' 'Active: 462448 kB' 'Inactive: 2840112 kB' 'Active(anon): 130424 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3182612 kB' 'Mapped: 48676 kB' 'AnonPages: 121364 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83776 kB' 'Slab: 166176 kB' 'SReclaimable: 83776 kB' 'SUnreclaim: 82400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
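
The hugepages.sh@27-33 records earlier in this trace (get_nodes) simply enumerate the NUMA nodes under sysfs and note how many hugepages each node is expected to hold; on this single-node VM that comes out as nodes_sys[0]=1025. A minimal stand-alone sketch of that step, using a plain glob instead of the script's extglob pattern (names here are illustrative, not the repo's exact code):

  #!/usr/bin/env bash
  # Enumerate NUMA nodes the way the traced get_nodes step does and record the
  # expected hugepage count per node (1025 for the odd_alloc test on this runner).
  expected_per_node=1025
  declare -a nodes_sys=()
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      [[ -d $node_dir ]] || continue        # skip the unexpanded glob if no nodes
      node=${node_dir##*node}               # "node0" -> "0"
      nodes_sys[node]=$expected_per_node    # nodes_sys[0]=1025 on this runner
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
  echo "found $no_nodes node(s): ${!nodes_sys[*]}"
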
00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
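
Every repeating "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" record above is one iteration of the common.sh get_meminfo loop: it picks /proc/meminfo or the per-node file, walks the "key: value" pairs with IFS=': ', and prints the value once the requested key matches. A self-contained sketch of that loop (the helper name and the prefix handling are mine, not the repo's exact implementation):

  #!/usr/bin/env bash
  # Return one field from /proc/meminfo, or from a node's meminfo when a node
  # number is given (the per-node file prefixes every line with "Node N ").
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Strip the "Node N " prefix, then split on ": " exactly like the trace.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the long run of "continue" records
          echo "$val"                        # e.g. 0 for HugePages_Surp
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  get_meminfo_sketch HugePages_Surp 0        # -> 0 on this runner
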
00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.639 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
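
The records that follow echo that surplus (0), add it into node0's total, and close out the odd_alloc test: node0 ends up with the expected 1025 pages, so the "1025 == 1025" check passes and the test reports just under a second of wall time. A rough, self-contained approximation of that per-node bookkeeping (the real hugepages.sh logic tracks surplus and reserved pages separately; the awk helper and the subtraction below are one plausible way to reach the same number on this runner, not the repo's exact code):

  #!/usr/bin/env bash
  # Approximate the odd_alloc check: each node should hold its expected share of
  # the 1025 requested hugepages once surplus pages are accounted for.
  expected=1025
  node_val() { awk -v k="$2:" '$3 == k {print $4; exit}' \
                   "/sys/devices/system/node/node$1/meminfo"; }
  declare -a nodes_test=()
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      [[ -d $node_dir ]] || continue
      node=${node_dir##*node}
      total=$(node_val "$node" HugePages_Total)
      surp=$(node_val "$node" HugePages_Surp)
      nodes_test[node]=$(( total - surp ))   # pages actually reserved for the test
      echo "node${node}=${nodes_test[node]} expecting $expected"
  done
  [[ ${nodes_test[0]} -eq $expected ]] && echo "odd_alloc: OK"
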
00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:56.640 node0=1025 expecting 1025 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:56.640 00:05:56.640 real 0m0.973s 00:05:56.640 user 0m0.387s 00:05:56.640 sys 0m0.652s 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.640 01:15:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:56.640 ************************************ 00:05:56.640 END TEST odd_alloc 00:05:56.640 ************************************ 00:05:56.640 01:15:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:56.640 01:15:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.640 01:15:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.640 01:15:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:56.640 ************************************ 00:05:56.640 START TEST custom_alloc 00:05:56.640 ************************************ 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.640 01:15:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:57.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.469 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.469 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.469 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.469 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:57.469 01:15:42 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.469 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7521632 kB' 'MemAvailable: 10498712 kB' 'Buffers: 2436 kB' 'Cached: 3180172 kB' 'SwapCached: 0 kB' 'Active: 459228 kB' 'Inactive: 2840108 kB' 'Active(anon): 127204 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118332 kB' 'Mapped: 48076 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 166036 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82264 kB' 'KernelStack: 6452 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.470 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7522564 kB' 'MemAvailable: 10499644 kB' 'Buffers: 2436 kB' 'Cached: 3180172 kB' 'SwapCached: 0 kB' 'Active: 458860 kB' 'Inactive: 2840108 kB' 'Active(anon): 126836 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118044 kB' 'Mapped: 48200 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 166036 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82264 kB' 'KernelStack: 6496 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 344932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.471 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.472 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[ ... the same "IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" xtrace block repeats for the remaining /proc/meminfo fields, WritebackTmp through HugePages_Rsvd ... ]
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
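What the entries above trace: get_meminfo in setup/common.sh walks the chosen meminfo file with IFS=': ' and read -r var val _, skipping every field until it reaches the one it was asked for, then echoes that value and returns (here HugePages_Surp, which is 0, so hugepages.sh records surp=0). A minimal standalone sketch of that lookup pattern, assuming plain bash and /proc/meminfo as the input; get_field is a hypothetical name, not the helper used by the repo:

    #!/usr/bin/env bash
    # Sketch only: the field-lookup pattern traced in the log, reduced to one helper.
    get_field() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            # a /proc/meminfo line looks like "HugePages_Surp:       0";
            # IFS=': ' strips the trailing colon from the key and splits off the "kB" unit
            [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0   # field absent
    }

    get_field HugePages_Surp   # prints 0 on this runner, matching the snapshots in this log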
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:57.473 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:57.474 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7522564 kB' 'MemAvailable: 10499644 kB' 'Buffers: 2436 kB' 'Cached: 3180172 kB' 'SwapCached: 0 kB' 'Active: 458872 kB' 'Inactive: 2840108 kB' 'Active(anon): 126848 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118280 kB' 'Mapped: 48000 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 166036 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82264 kB' 'KernelStack: 6512 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:57.474 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:57.474 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
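A note on the odd-looking right-hand side of these comparisons: \H\u\g\e\P\a\g\e\s\_\R\s\v\d is not how the source is written. The check in common.sh compares against a quoted variable, and bash xtrace prints a quoted pattern operand of == inside [[ ]] with every character backslash-escaped to show it will be matched literally rather than treated as a glob. A tiny reproduction, assuming any recent bash:

    # Sketch: reproduce the escaped pattern rendering seen in the xtrace above.
    set -x
    get=HugePages_Rsvd
    [[ MemTotal == "$get" ]] || true   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x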
[ ... the same read/compare/continue xtrace block repeats for every field from MemFree through HugePages_Free ... ]
00:05:57.476 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:57.476 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:57.476 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
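With surp=0 and resv=0 now in hand, the entries that follow assert that the 512-page pool this test configured is exactly what the kernel reports: hugepages.sh@107 checks (( 512 == nr_hugepages + surp + resv )), and after re-reading HugePages_Total the @110 check repeats the comparison against the live value. The snapshot above is already consistent with that: HugePages_Total: 512, HugePages_Free: 512, and Hugetlb: 1048576 kB equals 512 pages times the 2048 kB Hugepagesize. A condensed sketch of what those assertions amount to, reusing the hypothetical get_field helper from the earlier note:

    # Sketch of the pool sanity check the hugepages.sh entries below perform.
    want=512                                # pages the test asked for
    total=$(get_field HugePages_Total)
    surp=$(get_field HugePages_Surp)
    rsvd=$(get_field HugePages_Rsvd)
    if (( total == want + surp + rsvd )); then
        echo "hugepage pool consistent: total=$total surp=$surp rsvd=$rsvd"
    else
        echo "unexpected hugepage pool state" >&2
        exit 1
    fi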
00:05:57.476 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:57.476 nr_hugepages=512 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:57.476 resv_hugepages=0 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:57.476 surplus_hugepages=0 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:57.476 anon_hugepages=0 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:57.476 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:57.476 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7522312 kB' 'MemAvailable: 10499392 kB' 'Buffers: 2436 kB' 'Cached: 3180172 kB' 'SwapCached: 0 kB' 'Active: 458856 kB' 'Inactive: 2840108 kB' 'Active(anon): 126832 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118268 kB' 'Mapped: 47976 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 166036 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82264 kB' 'KernelStack: 6496 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:57.736 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[ ... the same read/compare/continue xtrace block repeats for every field from MemFree through Unaccepted ... ]
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
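Up to here every get_meminfo call ran against /proc/meminfo because no node argument was given (the [[ -e /sys/devices/system/node/node/meminfo ]] probe with an empty node name fails). get_nodes has just found a single NUMA node (nodes_sys[0]=512, no_nodes=1), and the next call passes node 0, so common.sh switches mem_f to /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix that every line of the per-node file carries. A minimal sketch of that path selection, assuming extglob is enabled as the +([0-9]) pattern requires; this is illustrative, not the repo's exact code:

    #!/usr/bin/env bash
    shopt -s extglob                  # required for the +([0-9]) prefix pattern
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines start with "Node 0 "; drop the prefix so the same
    # "Field: value" parsing works for system-wide and per-node data alike
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'

The node-0 snapshot that follows also carries a derived MemUsed field (4719660 kB, which is MemTotal 12241972 kB minus MemFree 7522312 kB), something the system-wide /proc/meminfo does not report.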
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7522312 kB' 'MemUsed: 4719660 kB' 'SwapCached: 0 kB' 'Active: 458992 kB' 'Inactive: 2840108 kB' 'Active(anon): 126968 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 3182608 kB' 'Mapped: 47976 kB' 'AnonPages: 118356 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83772 kB' 'Slab: 166036 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:57.738 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[ ... the same read/compare/continue xtrace block repeats for each node-0 field from MemFree through ShmemHugePages ... ]
00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc --
setup/common.sh@32 -- # continue 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:57.739 node0=512 expecting 512 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:57.739 00:05:57.739 real 0m1.027s 00:05:57.739 user 0m0.434s 00:05:57.739 sys 0m0.665s 00:05:57.739 01:15:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.739 01:15:42 
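The xtrace records above come from setup/hugepages.sh tallying the per-node hugepage count for the custom_alloc test and checking it against the expected value (the "node0=512 expecting 512" line). A minimal sketch of that check, simplified from the trace rather than taken from the actual setup/hugepages.sh, with illustrative variable names:

  declare -A nodes_test            # illustrative map: node id -> huge pages observed on that node
  nodes_test[0]=512                # value reported in the trace above
  expected=512
  for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]} expecting ${expected}"
    [[ ${nodes_test[$node]} -eq $expected ]] || exit 1   # fail the test on a mismatch
  done

The real script also records the observed counts in its sorted_t/sorted_s bookkeeping arrays before the comparison, which is what the sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1 records above correspond to.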
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:57.739 ************************************ 00:05:57.739 END TEST custom_alloc 00:05:57.739 ************************************ 00:05:57.739 01:15:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:57.739 01:15:42 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.739 01:15:42 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.739 01:15:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:57.739 ************************************ 00:05:57.739 START TEST no_shrink_alloc 00:05:57.739 ************************************ 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.739 01:15:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.569 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.569 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.569 0000:00:12.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:05:58.569 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6486396 kB' 'MemAvailable: 9463476 kB' 'Buffers: 2436 kB' 'Cached: 3180172 kB' 'SwapCached: 0 kB' 'Active: 459356 kB' 'Inactive: 2840108 kB' 'Active(anon): 127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118464 kB' 'Mapped: 48276 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 166016 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82244 kB' 'KernelStack: 6472 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 344932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 
9437184 kB' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.569 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.570 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.571 01:15:43 
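The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" and "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" records are setup/common.sh's get_meminfo walking every field of /proc/meminfo until it reaches the requested key and echoing its value. A minimal, simplified sketch of that pattern (the helper name and the direct file read are illustrative; the real get_meminfo snapshots the file with mapfile first):

  get_meminfo_sketch() {               # illustrative name, not the real setup/common.sh function
    local get=$1
    local var val _
    # the real helper can also read /sys/devices/system/node/node<N>/meminfo for
    # per-node values, stripping the leading "Node <N>" prefix first
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # skip every field except the one asked for
      echo "$val"                        # numeric part only; a trailing "kB" unit lands in "_"
      return 0
    done < /proc/meminfo
    return 1
  }
  get_meminfo_sketch HugePages_Surp      # would print 0 on the machine traced above

The comparison against the requested key is what produces one matching record per meminfo field in the trace, followed by "echo 0" and "return 0" once the key is found.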
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6486396 kB' 'MemAvailable: 9463476 kB' 'Buffers: 2436 kB' 'Cached: 3180172 kB' 'SwapCached: 0 kB' 'Active: 459568 kB' 'Inactive: 2840108 kB' 'Active(anon): 127544 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118676 kB' 'Mapped: 48216 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 166016 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82244 kB' 'KernelStack: 6456 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.571 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.572 01:15:43 
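At this point the trace has computed anon=0 and surp=0 and is about to read HugePages_Rsvd; together with HugePages_Total these are the counters verify_nr_hugepages compares against the requested allocation. For this no_shrink_alloc run the request was get_test_nr_hugepages 2097152 0, which is consistent with 2097152 kB / 2048 kB per huge page = 1024 pages on node 0, matching the "HugePages_Total: 1024" shown in the meminfo snapshots above. A self-contained way to pull the same counters outside the test harness (the hp helper name is illustrative and not part of the suite):

  hp() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }
  echo "total=$(hp HugePages_Total) free=$(hp HugePages_Free) rsvd=$(hp HugePages_Rsvd) surp=$(hp HugePages_Surp)"
  echo "anon_thp_kB=$(hp AnonHugePages)"
  # On the system traced above this would report total=1024 free=1024 rsvd=0 surp=0 and anon_thp_kB=0.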
00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:58.572 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:58.573 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6486752 kB' 'MemAvailable: 9463836 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 459184 kB' 'Inactive: 2840112 kB' 'Active(anon): 127160 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118332 kB' 'Mapped: 48060 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 165984 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82212 kB' 'KernelStack: 6528 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31/@32 xtrace: every field of the dump above is compared against HugePages_Rsvd and skipped with 'continue' until the match below ...]
00:05:58.574 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:58.574 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:58.574 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:58.574 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:58.574 nr_hugepages=1024
00:05:58.574 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:58.574 resv_hugepages=0
00:05:58.574 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:58.574 surplus_hugepages=0
00:05:58.574 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:58.575 anon_hugepages=0
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
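Once surp and resv are both 0, hugepages.sh prints the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary and the @107/@109 arithmetic tests gate this step: the kernel-reported total must equal the requested count plus surplus and reserved pages. The same check can be reproduced stand-alone (variable names here are illustrative, not SPDK's):

# Recreate the accounting check from the trace by reading /proc/meminfo directly.
nr_hugepages=1024                                             # count requested by the test
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this log
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 in this log
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 in this log
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent"
else
    echo "mismatch: total=$total surp=$surp resv=$resv"
fi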
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:58.575 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6486752 kB' 'MemAvailable: 9463836 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 459108 kB' 'Inactive: 2840112 kB' 'Active(anon): 127084 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 118016 kB' 'Mapped: 48136 kB' 'Shmem: 10476 kB' 'KReclaimable: 83772 kB' 'Slab: 165940 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82168 kB' 'KernelStack: 6512 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31/@32 xtrace: every field of the dump above is compared against HugePages_Total and skipped with 'continue' until the match below ...]
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:58.836 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
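At @117 the same lookup is repeated per NUMA node: because a node index is passed, common.sh swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from each line before matching (the mem=("${mem[@]#Node +([0-9]) }") entry above). A self-contained sketch of that node-aware variant (illustrative, not SPDK's exact helper):

# Per-node lookup sketch: prefer the sysfs per-node meminfo and drop its "Node N " prefix.
get_node_meminfo_sketch() {
    local node=$1 field=$2 mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    sed 's/^Node [0-9]* //' "$mem_f" | awk -v f="$field:" '$1 == f {print $2}'
}
# Example: get_node_meminfo_sketch 0 HugePages_Surp   -> 0 on this runner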
printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6486752 kB' 'MemUsed: 5755220 kB' 'SwapCached: 0 kB' 'Active: 459108 kB' 'Inactive: 2840112 kB' 'Active(anon): 127084 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 3182612 kB' 'Mapped: 48136 kB' 'AnonPages: 118016 kB' 'Shmem: 10476 kB' 'KernelStack: 6512 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83772 kB' 'Slab: 165940 kB' 'SReclaimable: 83772 kB' 'SUnreclaim: 82168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc [... setup/common.sh@31-32 read/compare/continue repeated for each remaining meminfo field until HugePages_Surp ...]
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:58.837 node0=1024 expecting 1024
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
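The comparison just above is the per-node side of the check: the test sums the hugepages reported for each NUMA node and compares the result with what it expects, which is what the 'node0=1024 expecting 1024' line records. A minimal sketch of that kind of check, assuming a single node and 2048 kB hugepages (the helper name check_node_hugepages is illustrative and not part of the SPDK scripts):

    # Illustrative only: read node0's 2 MiB hugepage count from sysfs and
    # compare it with the expected value, mirroring the log line above.
    check_node_hugepages() {
        local node=${1:-0} expected=${2:-1024}
        local path=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
        local actual
        actual=$(<"$path")
        echo "node${node}=${actual} expecting ${expected}"
        [[ $actual -eq $expected ]]
    }
    check_node_hugepages 0 1024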
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:58.837 01:15:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:59.405 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:59.405 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:59.405 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:59.405 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:59.405 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:59.666 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6481924 kB' 'MemAvailable: 9459016 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 459508 kB' 'Inactive: 2840112 kB' 'Active(anon): 127484 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118864 kB' 'Mapped: 48044 kB' 'Shmem: 10476 kB' 'KReclaimable: 83788 kB' 'Slab: 165988 kB' 'SReclaimable: 83788 kB' 'SUnreclaim: 82200 kB' 'KernelStack: 6536 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
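The snapshot above is internally consistent on the hugepage side: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so the reserved pool is 1024 × 2048 kB = 2097152 kB (2 GiB), exactly the Hugetlb figure, and with HugePages_Free also at 1024 none of that pool is in use yet.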
00:05:59.666 01:15:44 setup.sh.hugepages.no_shrink_alloc [... setup/common.sh@31-32 read/compare/continue repeated for each remaining /proc/meminfo field until AnonHugePages ...]
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
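The scan pattern traced above is the whole of the lookup: open /proc/meminfo (or the per-node meminfo file when a node is passed), read it field by field with IFS=': ', skip every field that is not the requested key, and echo its value. A condensed sketch of that behaviour, reconstructed from the trace rather than copied from setup/common.sh, and leaving out the per-node 'Node <n>' prefix handling the real helper does:

    # Sketch only: look up one numeric field from /proc/meminfo.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the field name, val the number, _ swallows the trailing "kB".
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch AnonHugePages   # prints 0 in the run above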
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6481924 kB' 'MemAvailable: 9459016 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 459252 kB' 'Inactive: 2840112 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118376 kB' 'Mapped: 47944 kB' 'Shmem: 10476 kB' 'KReclaimable: 83788 kB' 'Slab: 165992 kB' 'SReclaimable: 83788 kB' 'SUnreclaim: 82204 kB' 'KernelStack: 6512 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:59.667 01:15:44 setup.sh.hugepages.no_shrink_alloc [... setup/common.sh@31-32 read/compare/continue repeated for each remaining /proc/meminfo field until HugePages_Surp ...]
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
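With anon and surp both read back as 0, the remaining counter verify_nr_hugepages pulls the same way is HugePages_Rsvd. A hedged sketch of how those three reads fit together, reusing the get_meminfo_sketch helper from above (the real checks in setup/hugepages.sh are more involved than this):

    # Illustrative only: gather the counters the test inspects and flag
    # unexpected surplus or reserved pages before trusting free/total.
    anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "anon=$anon surp=$surp resv=$resv"
    (( surp == 0 && resv == 0 )) || echo 'unexpected surplus/reserved hugepages' >&2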
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6481924 kB' 'MemAvailable: 9459016 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 459420 kB' 'Inactive: 2840112 kB' 'Active(anon): 127396 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118592 kB' 'Mapped: 48204 kB' 'Shmem: 10476 kB' 'KReclaimable: 83788 kB' 'Slab: 165992 kB' 'SReclaimable: 83788 kB' 'SUnreclaim: 82204 kB' 'KernelStack: 6544 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB'
00:05:59.668 01:15:44 setup.sh.hugepages.no_shrink_alloc [... setup/common.sh@31-32 field-by-field scan toward HugePages_Rsvd in progress ...]
00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:59.669 nr_hugepages=1024 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:59.669 resv_hugepages=0 00:05:59.669 surplus_hugepages=0 00:05:59.669 anon_hugepages=0 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- 
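The wall of trace above is the no_shrink_alloc test's get_meminfo helper walking a captured copy of /proc/meminfo: every "Key: value" line is split with IFS=': ', the key is compared against the one being asked for (HugePages_Rsvd here), and the matching value is echoed, giving resv=0 and leaving nr_hugepages at 1024. A minimal stand-alone version of that pattern, assuming the standard meminfo layout (an illustrative sketch, not the actual setup/common.sh code):

    # get_meminfo_sketch KEY [NODE] - print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when a node number is given.
    # Illustrative rewrite of the pattern in the trace above, not SPDK's helper.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node N "; strip it so the
        # remaining "Key: value [kB]" layout matches /proc/meminfo.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # The checks in the trace, expressed with the sketch:
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run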
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6482176 kB' 'MemAvailable: 9459268 kB' 'Buffers: 2436 kB' 'Cached: 3180176 kB' 'SwapCached: 0 kB' 'Active: 459304 kB' 'Inactive: 2840112 kB' 'Active(anon): 127280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 118472 kB' 'Mapped: 48144 kB' 'Shmem: 10476 kB' 'KReclaimable: 83788 kB' 'Slab: 165984 kB' 'SReclaimable: 83788 kB' 'SUnreclaim: 82196 kB' 'KernelStack: 6528 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 202604 kB' 'DirectMap2M: 5040128 kB' 'DirectMap1G: 9437184 kB' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.669 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 
01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.670 01:15:44 
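With HugePages_Total echoed back as 1024, the test has the three numbers it needs and asserts that the total reported by the kernel equals the configured page count plus surplus plus reserved pages before moving on to the per-node breakdown. The same arithmetic written out with the helper from the earlier sketch (hypothetical variable names, same check shape as the hugepages.sh assertion):

    nr_hugepages=1024                                # what this run configured
    total=$(get_meminfo_sketch HugePages_Total)      # what the kernel reports: 1024
    surp=$(get_meminfo_sketch HugePages_Surp)        # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    else
        echo "mismatch: $total != $nr_hugepages + $surp + $resv" >&2
    fi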
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6482176 kB' 'MemUsed: 5759796 kB' 'SwapCached: 0 kB' 'Active: 459304 kB' 'Inactive: 2840112 kB' 'Active(anon): 127280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2840112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 3182612 kB' 'Mapped: 48144 kB' 'AnonPages: 118456 kB' 'Shmem: 10476 kB' 'KernelStack: 6528 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83788 kB' 'Slab: 165984 kB' 'SReclaimable: 83788 kB' 'SUnreclaim: 82196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.670 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:59.671 node0=1024 expecting 1024 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:59.671 00:05:59.671 real 0m2.014s 00:05:59.671 user 0m0.819s 00:05:59.671 sys 0m1.326s 00:05:59.671 ************************************ 00:05:59.671 END TEST no_shrink_alloc 00:05:59.671 ************************************ 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.671 01:15:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- 
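The tail of the test turns to the per-NUMA-node view: the node directories under /sys/devices/system/node are enumerated, node0's own meminfo is read for HugePages_Surp (0 here), and the observed count is compared against the expectation printed as "node0=1024 expecting 1024". Roughly, reusing the helper sketched above (a simplification of the nodes_test bookkeeping, not the hugepages.sh code itself):

    expected=1024
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        actual=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node${node}=${actual} expecting ${expected}"
        (( actual == expected )) || echo "node${node}: unexpected hugepage count" >&2
    done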
# set +x 00:05:59.930 01:15:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:59.930 01:15:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:59.930 01:15:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:59.930 01:15:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:59.930 01:15:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:59.930 01:15:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:59.930 01:15:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:59.930 01:15:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:59.930 01:15:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:59.930 ************************************ 00:05:59.930 END TEST hugepages 00:05:59.930 ************************************ 00:05:59.930 00:05:59.930 real 0m8.598s 00:05:59.930 user 0m3.422s 00:05:59.930 sys 0m5.499s 00:05:59.930 01:15:45 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.930 01:15:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:59.930 01:15:45 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:59.930 01:15:45 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.930 01:15:45 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.930 01:15:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:59.930 ************************************ 00:05:59.930 START TEST driver 00:05:59.930 ************************************ 00:05:59.930 01:15:45 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:59.930 * Looking for test storage... 
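The clear_hp pass above loops over every NUMA node's hugepage-size directories and echoes 0 into each one, releasing the pages the hugepages tests had reserved. A minimal sketch of that idea, assuming the standard sysfs target file nr_hugepages (the trace shows only the per-size loop and the echo, not the file name):

    # Release reserved hugepages of every size on every node (needs root).
    # Assumption: the target file is nr_hugepages, the usual sysfs knob.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
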
00:05:59.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:59.930 01:15:45 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:59.930 01:15:45 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:59.930 01:15:45 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:06.504 01:15:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:06.504 01:15:51 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.504 01:15:51 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.504 01:15:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:06.504 ************************************ 00:06:06.504 START TEST guess_driver 00:06:06.504 ************************************ 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:06:06.504 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:06.504 Looking for driver=uio_pci_generic 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
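In the guess_driver trace above the VM exposes no IOMMU groups (the (( 0 > 0 )) check fails) and unsafe no-IOMMU mode is not enabled, so the vfio branch returns 1 and the test settles on uio_pci_generic once modprobe --show-depends shows the module resolves to .ko files. A rough paraphrase of that decision, not the script verbatim; the vfio-pci name in the first branch is an assumption, since this run never reaches it:

    # Hedged paraphrase of the driver pick seen in the trace.
    shopt -s nullglob
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*) unsafe=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
            echo vfio-pci            # assumed name; not exercised in this run
        elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic     # the path this log takes
        else
            echo 'No valid driver found'
        fi
    }
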
00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.504 01:15:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:07.071 01:15:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:07.071 01:15:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:07.071 01:15:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:08.006 01:15:53 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:14.570 00:06:14.570 real 0m7.986s 00:06:14.570 user 0m0.984s 00:06:14.570 sys 0m2.199s 00:06:14.570 01:15:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.570 01:15:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:14.570 ************************************ 00:06:14.570 END TEST guess_driver 00:06:14.570 ************************************ 00:06:14.570 ************************************ 00:06:14.570 END TEST driver 00:06:14.570 ************************************ 00:06:14.570 00:06:14.570 real 0m14.559s 00:06:14.570 user 0m1.476s 00:06:14.570 sys 0m3.473s 00:06:14.570 01:15:59 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.570 01:15:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:14.570 01:15:59 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:14.570 01:15:59 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.570 01:15:59 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.570 01:15:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:14.570 ************************************ 00:06:14.570 START TEST devices 00:06:14.570 
************************************ 00:06:14.570 01:15:59 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:14.570 * Looking for test storage... 00:06:14.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:14.570 01:15:59 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:14.570 01:15:59 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:14.570 01:15:59 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:14.570 01:15:59 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:16.489 
01:16:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:16.489 01:16:01 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:16.489 01:16:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:16.490 No valid GPT data, bailing 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:16.490 No valid GPT data, bailing 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:06:16.490 No valid GPT data, bailing 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:16.490 
01:16:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:06:16.490 No valid GPT data, bailing 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:06:16.490 No valid GPT data, bailing 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:06:16.490 01:16:01 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:06:16.490 01:16:01 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:06:16.490 
01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:06:16.490 01:16:01 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:06:16.763 No valid GPT data, bailing 00:06:16.763 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:16.763 01:16:01 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:16.763 01:16:01 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:16.763 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:06:16.763 01:16:01 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:06:16.763 01:16:01 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:06:16.763 01:16:01 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:06:16.763 01:16:01 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:06:16.763 01:16:01 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:06:16.763 01:16:01 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:16.763 01:16:01 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:16.763 01:16:01 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.763 01:16:01 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.763 01:16:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:16.763 ************************************ 00:06:16.763 START TEST nvme_mount 00:06:16.763 ************************************ 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:16.763 01:16:01 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:17.703 Creating new GPT entries in memory. 00:06:17.703 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:17.703 other utilities. 00:06:17.703 01:16:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:17.703 01:16:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:17.703 01:16:02 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:17.703 01:16:02 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:17.703 01:16:02 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:18.661 Creating new GPT entries in memory. 00:06:18.661 The operation has completed successfully. 00:06:18.661 01:16:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:18.661 01:16:03 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:18.661 01:16:03 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71670 00:06:18.920 01:16:03 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.920 01:16:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:18.920 01:16:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.921 01:16:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:18.921 01:16:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.921 01:16:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:19.180 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:19.180 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:19.180 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:19.180 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.180 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:19.180 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.438 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:19.438 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.438 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:19.438 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.438 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:19.438 01:16:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.005 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:20.005 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:20.264 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:20.264 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:06:20.524 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:20.524 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:20.524 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:20.524 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:20.524 01:16:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:20.783 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:20.783 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:20.783 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:20.783 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.783 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:20.783 01:16:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.042 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:21.042 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.301 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:21.301 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.301 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:21.301 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.559 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:21.560 01:16:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:21.819 01:16:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:22.386 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.386 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:22.386 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:22.386 01:16:07 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.386 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.386 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.386 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.386 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.644 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.644 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.644 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.644 01:16:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.903 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:22.903 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:23.162 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:23.162 00:06:23.162 real 0m6.530s 00:06:23.162 user 0m1.673s 00:06:23.162 sys 0m2.504s 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.162 ************************************ 00:06:23.162 END TEST nvme_mount 00:06:23.162 01:16:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:23.162 ************************************ 00:06:23.421 01:16:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:23.422 01:16:08 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.422 01:16:08 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.422 01:16:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:23.422 ************************************ 00:06:23.422 START TEST dm_mount 00:06:23.422 ************************************ 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- 
setup/common.sh@39 -- # local disk=nvme0n1 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:23.422 01:16:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:24.361 Creating new GPT entries in memory. 00:06:24.361 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:24.361 other utilities. 00:06:24.361 01:16:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:24.361 01:16:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:24.361 01:16:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:24.361 01:16:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:24.361 01:16:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:25.301 Creating new GPT entries in memory. 00:06:25.301 The operation has completed successfully. 00:06:25.301 01:16:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:25.301 01:16:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:25.301 01:16:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:25.301 01:16:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:25.301 01:16:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:26.677 The operation has completed successfully. 
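The two sgdisk calls place partition 1 at sectors 2048-264191 and partition 2 at 264192-526335, 262144 sectors each: the script divides the 1073741824-byte request by 4096, which lines up with the 4096-byte logical sectors implied by the GPT header offsets wipefs reports elsewhere in this log (0x1000 and 0x13ffff000 on the 5368709120-byte disk), so each partition works out to 1 GiB. The arithmetic, reproduced for clarity:

    size=1073741824                            # requested bytes (1 GiB)
    (( size /= 4096 ))                         # 262144 sectors at 4096 B/sector
    part1_start=2048
    part1_end=$(( part1_start + size - 1 ))    # 264191
    part2_start=$(( part1_end + 1 ))           # 264192
    part2_end=$(( part2_start + size - 1 ))    # 526335
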
00:06:26.677 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:26.677 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:26.677 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72306 00:06:26.677 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:26.677 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:26.678 01:16:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:26.937 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:26.937 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:26.937 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:26.937 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.937 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:26.937 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.195 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.195 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.195 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.195 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.195 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.195 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.762 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:27.762 01:16:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.762 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:27.762 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:27.762 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:27.762 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:27.762 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28.021 01:16:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:28.280 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.280 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:28.280 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:28.280 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.280 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.280 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.539 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.539 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.539 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.539 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.539 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:28.539 01:16:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:29.105 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:29.363 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:29.363 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:06:29.363 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:29.363 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:29.363 01:16:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:29.363 00:06:29.363 real 0m5.987s 00:06:29.363 user 0m1.170s 00:06:29.363 sys 0m1.702s 00:06:29.364 01:16:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.364 ************************************ 00:06:29.364 END TEST dm_mount 00:06:29.364 ************************************ 00:06:29.364 01:16:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:29.364 01:16:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:29.364 01:16:14 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:29.364 01:16:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:29.364 01:16:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:29.364 01:16:14 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:29.364 01:16:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:29.364 01:16:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:29.623 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:29.623 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:29.623 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:29.623 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:29.623 01:16:14 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:29.623 01:16:14 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:29.623 01:16:14 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:29.623 01:16:14 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:29.623 01:16:14 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:29.623 01:16:14 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:29.623 01:16:14 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:29.623 00:06:29.623 real 0m15.154s 00:06:29.623 user 0m3.869s 00:06:29.623 sys 0m5.497s 00:06:29.623 01:16:14 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.623 01:16:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:29.623 ************************************ 00:06:29.623 END TEST devices 00:06:29.623 ************************************ 00:06:29.623 ************************************ 00:06:29.623 END TEST setup.sh 00:06:29.623 ************************************ 00:06:29.623 00:06:29.623 real 0m53.460s 00:06:29.623 user 0m12.485s 00:06:29.623 sys 0m21.009s 00:06:29.623 01:16:14 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.623 01:16:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:29.881 01:16:14 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:30.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:31.012 Hugepages 00:06:31.012 node hugesize free / total 00:06:31.012 node0 1048576kB 0 / 0 00:06:31.012 node0 2048kB 2048 / 2048 00:06:31.012 
00:06:31.012 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:31.270 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:31.270 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:31.529 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:31.529 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:31.787 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:31.787 01:16:16 -- spdk/autotest.sh@130 -- # uname -s 00:06:31.787 01:16:16 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:31.787 01:16:16 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:31.787 01:16:16 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:32.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:33.289 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.289 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.289 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.289 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.289 01:16:18 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:34.225 01:16:19 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:34.225 01:16:19 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:34.225 01:16:19 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:34.225 01:16:19 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:34.225 01:16:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:34.225 01:16:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:34.225 01:16:19 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:34.225 01:16:19 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:34.225 01:16:19 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:34.484 01:16:19 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:34.484 01:16:19 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:34.484 01:16:19 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.310 Waiting for block devices as requested 00:06:35.310 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.567 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.567 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.567 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:40.927 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:40.927 01:16:25 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:40.927 01:16:25 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:40.927 01:16:25 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.927 01:16:25 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:06:40.927 01:16:25 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:40.927 01:16:25 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:40.927 01:16:25 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:40.927 01:16:25 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:06:40.927 01:16:25 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:06:40.927 01:16:25 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:06:40.927 01:16:25 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:06:40.927 01:16:25 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:40.927 01:16:25 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:40.927 01:16:25 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:40.927 01:16:25 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:40.927 01:16:25 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:40.927 01:16:25 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:06:40.927 01:16:25 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:40.927 01:16:25 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:40.927 01:16:25 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:40.927 01:16:25 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:40.927 01:16:25 -- common/autotest_common.sh@1553 -- # continue 00:06:40.927 01:16:25 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:40.927 01:16:25 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:40.927 01:16:25 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.927 01:16:25 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:06:40.927 01:16:25 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:40.927 01:16:25 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:40.927 01:16:25 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:40.927 01:16:26 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:40.927 01:16:26 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:40.927 01:16:26 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:40.927 01:16:26 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:40.927 01:16:26 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:40.927 01:16:26 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1553 -- # continue 00:06:40.927 01:16:26 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:40.927 01:16:26 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:40.927 01:16:26 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1498 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:40.927 01:16:26 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:40.927 01:16:26 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:40.927 01:16:26 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:40.927 01:16:26 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1553 -- # continue 00:06:40.927 01:16:26 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:40.927 01:16:26 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:40.927 01:16:26 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:06:40.927 01:16:26 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:40.927 01:16:26 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:40.927 01:16:26 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:40.927 01:16:26 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:40.927 01:16:26 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:40.927 01:16:26 -- common/autotest_common.sh@1553 -- # continue 00:06:40.928 01:16:26 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:40.928 01:16:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.928 01:16:26 -- common/autotest_common.sh@10 -- # set +x 00:06:40.928 01:16:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:40.928 01:16:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:40.928 01:16:26 -- common/autotest_common.sh@10 -- # set +x 00:06:40.928 01:16:26 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:41.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.435 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.695 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:42.695 01:16:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:42.695 01:16:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.695 01:16:27 -- common/autotest_common.sh@10 -- # set +x 00:06:42.695 01:16:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:42.695 01:16:27 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:42.695 01:16:27 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:42.695 01:16:27 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:42.695 01:16:27 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:42.695 01:16:27 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:42.695 01:16:27 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:42.695 01:16:27 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:42.695 01:16:27 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:42.695 01:16:27 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:42.695 01:16:27 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:42.956 01:16:28 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:42.956 01:16:28 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:42.956 01:16:28 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:42.956 01:16:28 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.956 01:16:28 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:42.956 01:16:28 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.956 01:16:28 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:42.956 01:16:28 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.956 01:16:28 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:42.956 01:16:28 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:42.956 
01:16:28 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:42.956 01:16:28 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:42.956 01:16:28 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:42.956 01:16:28 -- common/autotest_common.sh@1589 -- # return 0 00:06:42.956 01:16:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:42.956 01:16:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:42.956 01:16:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:42.956 01:16:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:42.956 01:16:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:42.956 01:16:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:42.956 01:16:28 -- common/autotest_common.sh@10 -- # set +x 00:06:42.956 01:16:28 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:42.956 01:16:28 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:42.956 01:16:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.956 01:16:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.956 01:16:28 -- common/autotest_common.sh@10 -- # set +x 00:06:42.956 ************************************ 00:06:42.956 START TEST env 00:06:42.956 ************************************ 00:06:42.956 01:16:28 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:42.956 * Looking for test storage... 00:06:42.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:42.956 01:16:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:42.956 01:16:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.956 01:16:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.956 01:16:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.214 ************************************ 00:06:43.214 START TEST env_memory 00:06:43.214 ************************************ 00:06:43.214 01:16:28 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:43.214 00:06:43.214 00:06:43.214 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.214 http://cunit.sourceforge.net/ 00:06:43.214 00:06:43.214 00:06:43.214 Suite: memory 00:06:43.215 Test: alloc and free memory map ...[2024-07-21 01:16:28.341364] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:43.215 passed 00:06:43.215 Test: mem map translation ...[2024-07-21 01:16:28.379813] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:43.215 [2024-07-21 01:16:28.379861] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:43.215 [2024-07-21 01:16:28.379951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:43.215 [2024-07-21 01:16:28.379976] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:43.215 passed 00:06:43.215 Test: mem map registration ...[2024-07-21 01:16:28.440917] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:06:43.215 [2024-07-21 01:16:28.440956] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:43.215 passed 00:06:43.215 Test: mem map adjacent registrations ...passed 00:06:43.215 00:06:43.215 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.215 suites 1 1 n/a 0 0 00:06:43.215 tests 4 4 4 0 0 00:06:43.215 asserts 152 152 152 0 n/a 00:06:43.215 00:06:43.215 Elapsed time = 0.216 seconds 00:06:43.472 00:06:43.472 real 0m0.275s 00:06:43.472 user 0m0.223s 00:06:43.472 sys 0m0.042s 00:06:43.472 ************************************ 00:06:43.472 END TEST env_memory 00:06:43.472 ************************************ 00:06:43.472 01:16:28 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.472 01:16:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:43.472 01:16:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:43.472 01:16:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.472 01:16:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.472 01:16:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.472 ************************************ 00:06:43.472 START TEST env_vtophys 00:06:43.472 ************************************ 00:06:43.472 01:16:28 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:43.472 EAL: lib.eal log level changed from notice to debug 00:06:43.472 EAL: Detected lcore 0 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 1 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 2 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 3 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 4 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 5 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 6 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 7 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 8 as core 0 on socket 0 00:06:43.473 EAL: Detected lcore 9 as core 0 on socket 0 00:06:43.473 EAL: Maximum logical cores by configuration: 128 00:06:43.473 EAL: Detected CPU lcores: 10 00:06:43.473 EAL: Detected NUMA nodes: 1 00:06:43.473 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:43.473 EAL: Detected shared linkage of DPDK 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:43.473 EAL: Registered [vdev] bus. 
00:06:43.473 EAL: bus.vdev log level changed from disabled to notice 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:43.473 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:43.473 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:43.473 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:43.473 EAL: No shared files mode enabled, IPC will be disabled 00:06:43.473 EAL: No shared files mode enabled, IPC is disabled 00:06:43.473 EAL: Selected IOVA mode 'PA' 00:06:43.473 EAL: Probing VFIO support... 00:06:43.473 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:43.473 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:43.473 EAL: Ask a virtual area of 0x2e000 bytes 00:06:43.473 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:43.473 EAL: Setting up physically contiguous memory... 00:06:43.473 EAL: Setting maximum number of open files to 524288 00:06:43.473 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:43.473 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:43.473 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.473 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:43.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.473 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.473 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:43.473 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:43.473 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.473 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:43.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.473 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.473 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:43.473 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:43.473 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.473 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:43.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.473 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.473 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:43.473 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:43.473 EAL: Ask a virtual area of 0x61000 bytes 00:06:43.473 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:43.473 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:43.473 EAL: Ask a virtual area of 0x400000000 bytes 00:06:43.473 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:43.473 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:43.473 EAL: Hugepages will be freed exactly as allocated. 
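Aside on the memseg reservations above: each of the 4 segment lists reserves n_segs * hugepage_sz of virtual address space, which is why every list shows up as a 0x400000000-byte area (8192 segments * 2 MiB = 16 GiB). A minimal sketch to reproduce that arithmetic, not part of the test run:

    # 8192 two-megabyte segments per memseg list -> 16 GiB of reserved VA
    printf '0x%x\n' $(( 8192 * 2097152 ))   # prints 0x400000000, matching the EAL log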
00:06:43.473 EAL: No shared files mode enabled, IPC is disabled 00:06:43.473 EAL: No shared files mode enabled, IPC is disabled 00:06:43.731 EAL: TSC frequency is ~2490000 KHz 00:06:43.731 EAL: Main lcore 0 is ready (tid=7f7905677a40;cpuset=[0]) 00:06:43.731 EAL: Trying to obtain current memory policy. 00:06:43.731 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.731 EAL: Restoring previous memory policy: 0 00:06:43.731 EAL: request: mp_malloc_sync 00:06:43.731 EAL: No shared files mode enabled, IPC is disabled 00:06:43.731 EAL: Heap on socket 0 was expanded by 2MB 00:06:43.731 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:43.731 EAL: No shared files mode enabled, IPC is disabled 00:06:43.731 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:43.731 EAL: Mem event callback 'spdk:(nil)' registered 00:06:43.731 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:43.731 00:06:43.731 00:06:43.731 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.731 http://cunit.sourceforge.net/ 00:06:43.731 00:06:43.731 00:06:43.731 Suite: components_suite 00:06:44.296 Test: vtophys_malloc_test ...passed 00:06:44.296 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.296 EAL: Restoring previous memory policy: 4 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was expanded by 4MB 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was shrunk by 4MB 00:06:44.296 EAL: Trying to obtain current memory policy. 00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.296 EAL: Restoring previous memory policy: 4 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was expanded by 6MB 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was shrunk by 6MB 00:06:44.296 EAL: Trying to obtain current memory policy. 00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.296 EAL: Restoring previous memory policy: 4 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was expanded by 10MB 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was shrunk by 10MB 00:06:44.296 EAL: Trying to obtain current memory policy. 
00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.296 EAL: Restoring previous memory policy: 4 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was expanded by 18MB 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was shrunk by 18MB 00:06:44.296 EAL: Trying to obtain current memory policy. 00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.296 EAL: Restoring previous memory policy: 4 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was expanded by 34MB 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was shrunk by 34MB 00:06:44.296 EAL: Trying to obtain current memory policy. 00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.296 EAL: Restoring previous memory policy: 4 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was expanded by 66MB 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was shrunk by 66MB 00:06:44.296 EAL: Trying to obtain current memory policy. 00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.296 EAL: Restoring previous memory policy: 4 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was expanded by 130MB 00:06:44.296 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.296 EAL: request: mp_malloc_sync 00:06:44.296 EAL: No shared files mode enabled, IPC is disabled 00:06:44.296 EAL: Heap on socket 0 was shrunk by 130MB 00:06:44.296 EAL: Trying to obtain current memory policy. 00:06:44.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.555 EAL: Restoring previous memory policy: 4 00:06:44.555 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.555 EAL: request: mp_malloc_sync 00:06:44.555 EAL: No shared files mode enabled, IPC is disabled 00:06:44.555 EAL: Heap on socket 0 was expanded by 258MB 00:06:44.555 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.555 EAL: request: mp_malloc_sync 00:06:44.555 EAL: No shared files mode enabled, IPC is disabled 00:06:44.555 EAL: Heap on socket 0 was shrunk by 258MB 00:06:44.555 EAL: Trying to obtain current memory policy. 
00:06:44.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.813 EAL: Restoring previous memory policy: 4 00:06:44.813 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.813 EAL: request: mp_malloc_sync 00:06:44.813 EAL: No shared files mode enabled, IPC is disabled 00:06:44.813 EAL: Heap on socket 0 was expanded by 514MB 00:06:44.813 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.071 EAL: request: mp_malloc_sync 00:06:45.071 EAL: No shared files mode enabled, IPC is disabled 00:06:45.071 EAL: Heap on socket 0 was shrunk by 514MB 00:06:45.071 EAL: Trying to obtain current memory policy. 00:06:45.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.329 EAL: Restoring previous memory policy: 4 00:06:45.329 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.329 EAL: request: mp_malloc_sync 00:06:45.329 EAL: No shared files mode enabled, IPC is disabled 00:06:45.329 EAL: Heap on socket 0 was expanded by 1026MB 00:06:45.586 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.843 passed 00:06:45.844 00:06:45.844 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.844 suites 1 1 n/a 0 0 00:06:45.844 tests 2 2 2 0 0 00:06:45.844 asserts 5386 5386 5386 0 n/a 00:06:45.844 00:06:45.844 Elapsed time = 2.258 seconds 00:06:45.844 EAL: request: mp_malloc_sync 00:06:45.844 EAL: No shared files mode enabled, IPC is disabled 00:06:45.844 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:45.844 EAL: Calling mem event callback 'spdk:(nil)' 00:06:46.101 EAL: request: mp_malloc_sync 00:06:46.101 EAL: No shared files mode enabled, IPC is disabled 00:06:46.101 EAL: Heap on socket 0 was shrunk by 2MB 00:06:46.101 EAL: No shared files mode enabled, IPC is disabled 00:06:46.101 EAL: No shared files mode enabled, IPC is disabled 00:06:46.101 EAL: No shared files mode enabled, IPC is disabled 00:06:46.101 00:06:46.101 real 0m2.555s 00:06:46.101 user 0m1.273s 00:06:46.101 sys 0m1.139s 00:06:46.101 ************************************ 00:06:46.101 END TEST env_vtophys 00:06:46.101 ************************************ 00:06:46.101 01:16:31 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.101 01:16:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:46.101 01:16:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:46.101 01:16:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.101 01:16:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.101 01:16:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.101 ************************************ 00:06:46.101 START TEST env_pci 00:06:46.101 ************************************ 00:06:46.101 01:16:31 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:46.101 00:06:46.101 00:06:46.101 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.101 http://cunit.sourceforge.net/ 00:06:46.101 00:06:46.101 00:06:46.101 Suite: pci 00:06:46.101 Test: pci_hook ...[2024-07-21 01:16:31.299205] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 74115 has claimed it 00:06:46.101 passed 00:06:46.101 00:06:46.101 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.101 suites 1 1 n/a 0 0 00:06:46.101 tests 1 1 1 0 0 00:06:46.101 asserts 25 25 25 0 n/a 00:06:46.101 00:06:46.101 Elapsed time = 0.008 seconds 00:06:46.101 EAL: Cannot find 
device (10000:00:01.0) 00:06:46.101 EAL: Failed to attach device on primary process 00:06:46.101 00:06:46.101 real 0m0.106s 00:06:46.101 user 0m0.047s 00:06:46.101 sys 0m0.059s 00:06:46.101 ************************************ 00:06:46.101 END TEST env_pci 00:06:46.101 ************************************ 00:06:46.101 01:16:31 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.102 01:16:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:46.359 01:16:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:46.359 01:16:31 env -- env/env.sh@15 -- # uname 00:06:46.359 01:16:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:46.359 01:16:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:46.359 01:16:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:46.359 01:16:31 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:46.359 01:16:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.359 01:16:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.359 ************************************ 00:06:46.359 START TEST env_dpdk_post_init 00:06:46.359 ************************************ 00:06:46.359 01:16:31 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:46.359 EAL: Detected CPU lcores: 10 00:06:46.359 EAL: Detected NUMA nodes: 1 00:06:46.359 EAL: Detected shared linkage of DPDK 00:06:46.359 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:46.359 EAL: Selected IOVA mode 'PA' 00:06:46.359 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.617 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:46.617 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:46.617 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:46.617 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:46.617 Starting DPDK initialization... 00:06:46.617 Starting SPDK post initialization... 00:06:46.617 SPDK NVMe probe 00:06:46.617 Attaching to 0000:00:10.0 00:06:46.618 Attaching to 0000:00:11.0 00:06:46.618 Attaching to 0000:00:12.0 00:06:46.618 Attaching to 0000:00:13.0 00:06:46.618 Attached to 0000:00:10.0 00:06:46.618 Attached to 0000:00:11.0 00:06:46.618 Attached to 0000:00:13.0 00:06:46.618 Attached to 0000:00:12.0 00:06:46.618 Cleaning up... 
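The env_dpdk_post_init run above probes the four emulated controllers (0000:00:10.0 through 0000:00:13.0) and then cleans up again. A quick way to confirm which kernel driver each BDF ends up bound to, independent of the test harness (a minimal sketch; the BDF list is taken from this log):

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        drv=unbound
        [ -e "/sys/bus/pci/devices/$bdf/driver" ] && drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
        echo "$bdf -> $drv"
    done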
00:06:46.618 00:06:46.618 real 0m0.289s 00:06:46.618 user 0m0.083s 00:06:46.618 sys 0m0.109s 00:06:46.618 01:16:31 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.618 ************************************ 00:06:46.618 END TEST env_dpdk_post_init 00:06:46.618 ************************************ 00:06:46.618 01:16:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:46.618 01:16:31 env -- env/env.sh@26 -- # uname 00:06:46.618 01:16:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:46.618 01:16:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.618 01:16:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.618 01:16:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.618 01:16:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.618 ************************************ 00:06:46.618 START TEST env_mem_callbacks 00:06:46.618 ************************************ 00:06:46.618 01:16:31 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.618 EAL: Detected CPU lcores: 10 00:06:46.618 EAL: Detected NUMA nodes: 1 00:06:46.618 EAL: Detected shared linkage of DPDK 00:06:46.618 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:46.618 EAL: Selected IOVA mode 'PA' 00:06:46.876 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.876 00:06:46.876 00:06:46.876 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.876 http://cunit.sourceforge.net/ 00:06:46.876 00:06:46.876 00:06:46.876 Suite: memory 00:06:46.876 Test: test ... 00:06:46.876 register 0x200000200000 2097152 00:06:46.876 malloc 3145728 00:06:46.876 register 0x200000400000 4194304 00:06:46.876 buf 0x200000500000 len 3145728 PASSED 00:06:46.876 malloc 64 00:06:46.876 buf 0x2000004fff40 len 64 PASSED 00:06:46.876 malloc 4194304 00:06:46.876 register 0x200000800000 6291456 00:06:46.876 buf 0x200000a00000 len 4194304 PASSED 00:06:46.876 free 0x200000500000 3145728 00:06:46.876 free 0x2000004fff40 64 00:06:46.876 unregister 0x200000400000 4194304 PASSED 00:06:46.876 free 0x200000a00000 4194304 00:06:46.876 unregister 0x200000800000 6291456 PASSED 00:06:46.876 malloc 8388608 00:06:46.876 register 0x200000400000 10485760 00:06:46.876 buf 0x200000600000 len 8388608 PASSED 00:06:46.876 free 0x200000600000 8388608 00:06:46.876 unregister 0x200000400000 10485760 PASSED 00:06:46.876 passed 00:06:46.876 00:06:46.876 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.876 suites 1 1 n/a 0 0 00:06:46.876 tests 1 1 1 0 0 00:06:46.876 asserts 15 15 15 0 n/a 00:06:46.876 00:06:46.876 Elapsed time = 0.014 seconds 00:06:46.876 00:06:46.876 real 0m0.222s 00:06:46.876 user 0m0.043s 00:06:46.876 sys 0m0.078s 00:06:46.876 01:16:32 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.876 ************************************ 00:06:46.876 END TEST env_mem_callbacks 00:06:46.876 01:16:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:46.876 ************************************ 00:06:46.876 00:06:46.876 real 0m3.993s 00:06:46.876 user 0m1.833s 00:06:46.876 sys 0m1.795s 00:06:46.876 01:16:32 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.876 01:16:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.876 ************************************ 00:06:46.876 END TEST env 00:06:46.876 
************************************ 00:06:46.876 01:16:32 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:46.876 01:16:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.876 01:16:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.876 01:16:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.135 ************************************ 00:06:47.135 START TEST rpc 00:06:47.135 ************************************ 00:06:47.135 01:16:32 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:47.135 * Looking for test storage... 00:06:47.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:47.135 01:16:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=74234 00:06:47.135 01:16:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:47.135 01:16:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.135 01:16:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 74234 00:06:47.135 01:16:32 rpc -- common/autotest_common.sh@827 -- # '[' -z 74234 ']' 00:06:47.135 01:16:32 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.135 01:16:32 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.135 01:16:32 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.135 01:16:32 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.135 01:16:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.135 [2024-07-21 01:16:32.440506] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:47.135 [2024-07-21 01:16:32.440634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74234 ] 00:06:47.393 [2024-07-21 01:16:32.609083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.393 [2024-07-21 01:16:32.675233] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:47.393 [2024-07-21 01:16:32.675313] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 74234' to capture a snapshot of events at runtime. 00:06:47.393 [2024-07-21 01:16:32.675327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.393 [2024-07-21 01:16:32.675356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.393 [2024-07-21 01:16:32.675371] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid74234 for offline analysis/debug. 
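The rpc suite here starts spdk_tgt with '-e bdev', waits for /var/tmp/spdk.sock, and then drives the bdev RPCs through the rpc_cmd wrapper; bdev_malloc_create 8 512 is what produces the 8 MiB Malloc0 (16384 blocks x 512 bytes) visible in the JSON further down. The same flow can be exercised by hand against a running target with scripts/rpc.py (a minimal sketch using the default socket; it assumes a fresh target with no other bdevs):

    ./scripts/rpc.py bdev_malloc_create 8 512                   # creates Malloc0 (8 MiB, 512-byte blocks)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length                 # expect 2 bdevs
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0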
00:06:47.393 [2024-07-21 01:16:32.675431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.961 01:16:33 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.961 01:16:33 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:47.961 01:16:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:47.961 01:16:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:47.961 01:16:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:47.961 01:16:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:47.961 01:16:33 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.961 01:16:33 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.961 01:16:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.961 ************************************ 00:06:47.961 START TEST rpc_integrity 00:06:47.961 ************************************ 00:06:47.961 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:47.961 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.961 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.961 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.961 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.961 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.961 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:48.220 { 00:06:48.220 "name": "Malloc0", 00:06:48.220 "aliases": [ 00:06:48.220 "a981101f-fd0a-4204-a1c4-2b9bb45ccef2" 00:06:48.220 ], 00:06:48.220 "product_name": "Malloc disk", 00:06:48.220 "block_size": 512, 00:06:48.220 "num_blocks": 16384, 00:06:48.220 "uuid": "a981101f-fd0a-4204-a1c4-2b9bb45ccef2", 00:06:48.220 "assigned_rate_limits": { 00:06:48.220 "rw_ios_per_sec": 0, 00:06:48.220 "rw_mbytes_per_sec": 0, 00:06:48.220 "r_mbytes_per_sec": 0, 00:06:48.220 "w_mbytes_per_sec": 0 00:06:48.220 }, 00:06:48.220 "claimed": false, 00:06:48.220 "zoned": false, 00:06:48.220 "supported_io_types": { 00:06:48.220 "read": true, 00:06:48.220 "write": true, 00:06:48.220 "unmap": true, 00:06:48.220 "write_zeroes": 
true, 00:06:48.220 "flush": true, 00:06:48.220 "reset": true, 00:06:48.220 "compare": false, 00:06:48.220 "compare_and_write": false, 00:06:48.220 "abort": true, 00:06:48.220 "nvme_admin": false, 00:06:48.220 "nvme_io": false 00:06:48.220 }, 00:06:48.220 "memory_domains": [ 00:06:48.220 { 00:06:48.220 "dma_device_id": "system", 00:06:48.220 "dma_device_type": 1 00:06:48.220 }, 00:06:48.220 { 00:06:48.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.220 "dma_device_type": 2 00:06:48.220 } 00:06:48.220 ], 00:06:48.220 "driver_specific": {} 00:06:48.220 } 00:06:48.220 ]' 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.220 [2024-07-21 01:16:33.373513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:48.220 [2024-07-21 01:16:33.373588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.220 [2024-07-21 01:16:33.373634] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:48.220 [2024-07-21 01:16:33.373660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.220 [2024-07-21 01:16:33.376552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.220 [2024-07-21 01:16:33.376591] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:48.220 Passthru0 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.220 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.220 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:48.220 { 00:06:48.220 "name": "Malloc0", 00:06:48.220 "aliases": [ 00:06:48.220 "a981101f-fd0a-4204-a1c4-2b9bb45ccef2" 00:06:48.220 ], 00:06:48.220 "product_name": "Malloc disk", 00:06:48.220 "block_size": 512, 00:06:48.220 "num_blocks": 16384, 00:06:48.220 "uuid": "a981101f-fd0a-4204-a1c4-2b9bb45ccef2", 00:06:48.220 "assigned_rate_limits": { 00:06:48.220 "rw_ios_per_sec": 0, 00:06:48.220 "rw_mbytes_per_sec": 0, 00:06:48.220 "r_mbytes_per_sec": 0, 00:06:48.220 "w_mbytes_per_sec": 0 00:06:48.220 }, 00:06:48.220 "claimed": true, 00:06:48.220 "claim_type": "exclusive_write", 00:06:48.220 "zoned": false, 00:06:48.220 "supported_io_types": { 00:06:48.220 "read": true, 00:06:48.220 "write": true, 00:06:48.220 "unmap": true, 00:06:48.220 "write_zeroes": true, 00:06:48.220 "flush": true, 00:06:48.220 "reset": true, 00:06:48.220 "compare": false, 00:06:48.220 "compare_and_write": false, 00:06:48.220 "abort": true, 00:06:48.220 "nvme_admin": false, 00:06:48.220 "nvme_io": false 00:06:48.220 }, 00:06:48.220 "memory_domains": [ 00:06:48.220 { 00:06:48.220 "dma_device_id": "system", 00:06:48.220 "dma_device_type": 1 00:06:48.220 }, 00:06:48.220 { 00:06:48.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.220 "dma_device_type": 2 00:06:48.220 } 
00:06:48.220 ], 00:06:48.220 "driver_specific": {} 00:06:48.220 }, 00:06:48.220 { 00:06:48.220 "name": "Passthru0", 00:06:48.220 "aliases": [ 00:06:48.220 "84fc3e99-83dd-5ee3-9c4c-63740e9e7ebf" 00:06:48.220 ], 00:06:48.220 "product_name": "passthru", 00:06:48.220 "block_size": 512, 00:06:48.220 "num_blocks": 16384, 00:06:48.220 "uuid": "84fc3e99-83dd-5ee3-9c4c-63740e9e7ebf", 00:06:48.220 "assigned_rate_limits": { 00:06:48.220 "rw_ios_per_sec": 0, 00:06:48.220 "rw_mbytes_per_sec": 0, 00:06:48.220 "r_mbytes_per_sec": 0, 00:06:48.220 "w_mbytes_per_sec": 0 00:06:48.220 }, 00:06:48.220 "claimed": false, 00:06:48.220 "zoned": false, 00:06:48.220 "supported_io_types": { 00:06:48.220 "read": true, 00:06:48.220 "write": true, 00:06:48.220 "unmap": true, 00:06:48.220 "write_zeroes": true, 00:06:48.220 "flush": true, 00:06:48.220 "reset": true, 00:06:48.220 "compare": false, 00:06:48.220 "compare_and_write": false, 00:06:48.220 "abort": true, 00:06:48.220 "nvme_admin": false, 00:06:48.220 "nvme_io": false 00:06:48.220 }, 00:06:48.220 "memory_domains": [ 00:06:48.220 { 00:06:48.220 "dma_device_id": "system", 00:06:48.220 "dma_device_type": 1 00:06:48.220 }, 00:06:48.220 { 00:06:48.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.220 "dma_device_type": 2 00:06:48.220 } 00:06:48.220 ], 00:06:48.220 "driver_specific": { 00:06:48.220 "passthru": { 00:06:48.220 "name": "Passthru0", 00:06:48.221 "base_bdev_name": "Malloc0" 00:06:48.221 } 00:06:48.221 } 00:06:48.221 } 00:06:48.221 ]' 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:48.221 01:16:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:48.221 00:06:48.221 real 0m0.305s 00:06:48.221 user 0m0.171s 00:06:48.221 sys 0m0.063s 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.221 01:16:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.221 ************************************ 00:06:48.221 END TEST rpc_integrity 00:06:48.221 ************************************ 00:06:48.479 01:16:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:48.479 01:16:33 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.479 01:16:33 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.479 01:16:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 ************************************ 00:06:48.479 START TEST rpc_plugins 00:06:48.479 ************************************ 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:48.479 { 00:06:48.479 "name": "Malloc1", 00:06:48.479 "aliases": [ 00:06:48.479 "31217997-293a-4001-8bed-e57cb91708b7" 00:06:48.479 ], 00:06:48.479 "product_name": "Malloc disk", 00:06:48.479 "block_size": 4096, 00:06:48.479 "num_blocks": 256, 00:06:48.479 "uuid": "31217997-293a-4001-8bed-e57cb91708b7", 00:06:48.479 "assigned_rate_limits": { 00:06:48.479 "rw_ios_per_sec": 0, 00:06:48.479 "rw_mbytes_per_sec": 0, 00:06:48.479 "r_mbytes_per_sec": 0, 00:06:48.479 "w_mbytes_per_sec": 0 00:06:48.479 }, 00:06:48.479 "claimed": false, 00:06:48.479 "zoned": false, 00:06:48.479 "supported_io_types": { 00:06:48.479 "read": true, 00:06:48.479 "write": true, 00:06:48.479 "unmap": true, 00:06:48.479 "write_zeroes": true, 00:06:48.479 "flush": true, 00:06:48.479 "reset": true, 00:06:48.479 "compare": false, 00:06:48.479 "compare_and_write": false, 00:06:48.479 "abort": true, 00:06:48.479 "nvme_admin": false, 00:06:48.479 "nvme_io": false 00:06:48.479 }, 00:06:48.479 "memory_domains": [ 00:06:48.479 { 00:06:48.479 "dma_device_id": "system", 00:06:48.479 "dma_device_type": 1 00:06:48.479 }, 00:06:48.479 { 00:06:48.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.479 "dma_device_type": 2 00:06:48.479 } 00:06:48.479 ], 00:06:48.479 "driver_specific": {} 00:06:48.479 } 00:06:48.479 ]' 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:48.479 01:16:33 rpc.rpc_plugins -- rpc/rpc.sh@36 
-- # '[' 0 == 0 ']' 00:06:48.479 00:06:48.479 real 0m0.157s 00:06:48.479 user 0m0.094s 00:06:48.479 sys 0m0.028s 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.479 ************************************ 00:06:48.479 END TEST rpc_plugins 00:06:48.479 01:16:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 ************************************ 00:06:48.738 01:16:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:48.738 01:16:33 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.738 01:16:33 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.738 01:16:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.738 ************************************ 00:06:48.738 START TEST rpc_trace_cmd_test 00:06:48.738 ************************************ 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:48.738 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid74234", 00:06:48.738 "tpoint_group_mask": "0x8", 00:06:48.738 "iscsi_conn": { 00:06:48.738 "mask": "0x2", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "scsi": { 00:06:48.738 "mask": "0x4", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "bdev": { 00:06:48.738 "mask": "0x8", 00:06:48.738 "tpoint_mask": "0xffffffffffffffff" 00:06:48.738 }, 00:06:48.738 "nvmf_rdma": { 00:06:48.738 "mask": "0x10", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "nvmf_tcp": { 00:06:48.738 "mask": "0x20", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "ftl": { 00:06:48.738 "mask": "0x40", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "blobfs": { 00:06:48.738 "mask": "0x80", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "dsa": { 00:06:48.738 "mask": "0x200", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "thread": { 00:06:48.738 "mask": "0x400", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "nvme_pcie": { 00:06:48.738 "mask": "0x800", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "iaa": { 00:06:48.738 "mask": "0x1000", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "nvme_tcp": { 00:06:48.738 "mask": "0x2000", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "bdev_nvme": { 00:06:48.738 "mask": "0x4000", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 }, 00:06:48.738 "sock": { 00:06:48.738 "mask": "0x8000", 00:06:48.738 "tpoint_mask": "0x0" 00:06:48.738 } 00:06:48.738 }' 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:48.738 01:16:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:48.738 01:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:48.738 01:16:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:48.738 00:06:48.738 real 0m0.197s 00:06:48.738 user 0m0.157s 00:06:48.738 sys 0m0.028s 00:06:48.738 01:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.738 ************************************ 00:06:48.738 END TEST rpc_trace_cmd_test 00:06:48.738 ************************************ 00:06:48.738 01:16:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.997 01:16:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:48.997 01:16:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:48.997 01:16:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:48.997 01:16:34 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.997 01:16:34 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.997 01:16:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.997 ************************************ 00:06:48.997 START TEST rpc_daemon_integrity 00:06:48.997 ************************************ 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.997 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:48.998 { 00:06:48.998 "name": "Malloc2", 00:06:48.998 "aliases": [ 00:06:48.998 "a3055956-9295-4104-a8f3-ed34bdd596ad" 00:06:48.998 ], 00:06:48.998 "product_name": "Malloc disk", 00:06:48.998 "block_size": 512, 00:06:48.998 "num_blocks": 16384, 00:06:48.998 "uuid": "a3055956-9295-4104-a8f3-ed34bdd596ad", 00:06:48.998 "assigned_rate_limits": { 00:06:48.998 "rw_ios_per_sec": 0, 00:06:48.998 
"rw_mbytes_per_sec": 0, 00:06:48.998 "r_mbytes_per_sec": 0, 00:06:48.998 "w_mbytes_per_sec": 0 00:06:48.998 }, 00:06:48.998 "claimed": false, 00:06:48.998 "zoned": false, 00:06:48.998 "supported_io_types": { 00:06:48.998 "read": true, 00:06:48.998 "write": true, 00:06:48.998 "unmap": true, 00:06:48.998 "write_zeroes": true, 00:06:48.998 "flush": true, 00:06:48.998 "reset": true, 00:06:48.998 "compare": false, 00:06:48.998 "compare_and_write": false, 00:06:48.998 "abort": true, 00:06:48.998 "nvme_admin": false, 00:06:48.998 "nvme_io": false 00:06:48.998 }, 00:06:48.998 "memory_domains": [ 00:06:48.998 { 00:06:48.998 "dma_device_id": "system", 00:06:48.998 "dma_device_type": 1 00:06:48.998 }, 00:06:48.998 { 00:06:48.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.998 "dma_device_type": 2 00:06:48.998 } 00:06:48.998 ], 00:06:48.998 "driver_specific": {} 00:06:48.998 } 00:06:48.998 ]' 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.998 [2024-07-21 01:16:34.232975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:48.998 [2024-07-21 01:16:34.233030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.998 [2024-07-21 01:16:34.233051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:48.998 [2024-07-21 01:16:34.233066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.998 [2024-07-21 01:16:34.235713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.998 [2024-07-21 01:16:34.235754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:48.998 Passthru0 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:48.998 { 00:06:48.998 "name": "Malloc2", 00:06:48.998 "aliases": [ 00:06:48.998 "a3055956-9295-4104-a8f3-ed34bdd596ad" 00:06:48.998 ], 00:06:48.998 "product_name": "Malloc disk", 00:06:48.998 "block_size": 512, 00:06:48.998 "num_blocks": 16384, 00:06:48.998 "uuid": "a3055956-9295-4104-a8f3-ed34bdd596ad", 00:06:48.998 "assigned_rate_limits": { 00:06:48.998 "rw_ios_per_sec": 0, 00:06:48.998 "rw_mbytes_per_sec": 0, 00:06:48.998 "r_mbytes_per_sec": 0, 00:06:48.998 "w_mbytes_per_sec": 0 00:06:48.998 }, 00:06:48.998 "claimed": true, 00:06:48.998 "claim_type": "exclusive_write", 00:06:48.998 "zoned": false, 00:06:48.998 "supported_io_types": { 00:06:48.998 "read": true, 00:06:48.998 "write": true, 00:06:48.998 "unmap": true, 00:06:48.998 "write_zeroes": true, 00:06:48.998 "flush": true, 00:06:48.998 "reset": true, 00:06:48.998 "compare": false, 
00:06:48.998 "compare_and_write": false, 00:06:48.998 "abort": true, 00:06:48.998 "nvme_admin": false, 00:06:48.998 "nvme_io": false 00:06:48.998 }, 00:06:48.998 "memory_domains": [ 00:06:48.998 { 00:06:48.998 "dma_device_id": "system", 00:06:48.998 "dma_device_type": 1 00:06:48.998 }, 00:06:48.998 { 00:06:48.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.998 "dma_device_type": 2 00:06:48.998 } 00:06:48.998 ], 00:06:48.998 "driver_specific": {} 00:06:48.998 }, 00:06:48.998 { 00:06:48.998 "name": "Passthru0", 00:06:48.998 "aliases": [ 00:06:48.998 "18d0a913-794c-5e88-9448-f68254f80e29" 00:06:48.998 ], 00:06:48.998 "product_name": "passthru", 00:06:48.998 "block_size": 512, 00:06:48.998 "num_blocks": 16384, 00:06:48.998 "uuid": "18d0a913-794c-5e88-9448-f68254f80e29", 00:06:48.998 "assigned_rate_limits": { 00:06:48.998 "rw_ios_per_sec": 0, 00:06:48.998 "rw_mbytes_per_sec": 0, 00:06:48.998 "r_mbytes_per_sec": 0, 00:06:48.998 "w_mbytes_per_sec": 0 00:06:48.998 }, 00:06:48.998 "claimed": false, 00:06:48.998 "zoned": false, 00:06:48.998 "supported_io_types": { 00:06:48.998 "read": true, 00:06:48.998 "write": true, 00:06:48.998 "unmap": true, 00:06:48.998 "write_zeroes": true, 00:06:48.998 "flush": true, 00:06:48.998 "reset": true, 00:06:48.998 "compare": false, 00:06:48.998 "compare_and_write": false, 00:06:48.998 "abort": true, 00:06:48.998 "nvme_admin": false, 00:06:48.998 "nvme_io": false 00:06:48.998 }, 00:06:48.998 "memory_domains": [ 00:06:48.998 { 00:06:48.998 "dma_device_id": "system", 00:06:48.998 "dma_device_type": 1 00:06:48.998 }, 00:06:48.998 { 00:06:48.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.998 "dma_device_type": 2 00:06:48.998 } 00:06:48.998 ], 00:06:48.998 "driver_specific": { 00:06:48.998 "passthru": { 00:06:48.998 "name": "Passthru0", 00:06:48.998 "base_bdev_name": "Malloc2" 00:06:48.998 } 00:06:48.998 } 00:06:48.998 } 00:06:48.998 ]' 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.998 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.257 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.258 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:49.258 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:49.258 01:16:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:49.258 00:06:49.258 real 0m0.281s 00:06:49.258 user 0m0.162s 
00:06:49.258 sys 0m0.055s 00:06:49.258 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.258 01:16:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.258 ************************************ 00:06:49.258 END TEST rpc_daemon_integrity 00:06:49.258 ************************************ 00:06:49.258 01:16:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:49.258 01:16:34 rpc -- rpc/rpc.sh@84 -- # killprocess 74234 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@946 -- # '[' -z 74234 ']' 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@950 -- # kill -0 74234 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@951 -- # uname 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74234 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:49.258 killing process with pid 74234 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74234' 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@965 -- # kill 74234 00:06:49.258 01:16:34 rpc -- common/autotest_common.sh@970 -- # wait 74234 00:06:49.826 00:06:49.826 real 0m2.870s 00:06:49.826 user 0m3.154s 00:06:49.826 sys 0m1.046s 00:06:49.826 01:16:35 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.826 01:16:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.826 ************************************ 00:06:49.826 END TEST rpc 00:06:49.826 ************************************ 00:06:49.826 01:16:35 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:49.826 01:16:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.826 01:16:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.826 01:16:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.826 ************************************ 00:06:49.827 START TEST skip_rpc 00:06:49.827 ************************************ 00:06:49.827 01:16:35 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:50.086 * Looking for test storage... 
00:06:50.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:50.086 01:16:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:50.086 01:16:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:50.086 01:16:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:50.086 01:16:35 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.086 01:16:35 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.086 01:16:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.086 ************************************ 00:06:50.086 START TEST skip_rpc 00:06:50.086 ************************************ 00:06:50.086 01:16:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:50.086 01:16:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74428 00:06:50.086 01:16:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:50.086 01:16:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.086 01:16:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:50.086 [2024-07-21 01:16:35.384356] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:50.086 [2024-07-21 01:16:35.384457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74428 ] 00:06:50.345 [2024-07-21 01:16:35.553203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.345 [2024-07-21 01:16:35.628284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:55.613 01:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74428 00:06:55.613 01:16:40 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 74428 ']' 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 74428 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74428 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.614 killing process with pid 74428 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74428' 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 74428 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 74428 00:06:55.614 00:06:55.614 real 0m5.638s 00:06:55.614 user 0m5.052s 00:06:55.614 sys 0m0.505s 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.614 01:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.614 ************************************ 00:06:55.614 END TEST skip_rpc 00:06:55.614 ************************************ 00:06:55.872 01:16:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:55.872 01:16:40 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.872 01:16:40 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.872 01:16:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.872 ************************************ 00:06:55.872 START TEST skip_rpc_with_json 00:06:55.872 ************************************ 00:06:55.872 01:16:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:55.872 01:16:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74521 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74521 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 74521 ']' 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.872 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.872 [2024-07-21 01:16:41.113096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:55.872 [2024-07-21 01:16:41.113248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74521 ] 00:06:56.130 [2024-07-21 01:16:41.285794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.130 [2024-07-21 01:16:41.355934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.698 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.698 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:56.698 01:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:56.698 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.698 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.698 [2024-07-21 01:16:41.912491] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:56.698 request: 00:06:56.698 { 00:06:56.698 "trtype": "tcp", 00:06:56.698 "method": "nvmf_get_transports", 00:06:56.698 "req_id": 1 00:06:56.698 } 00:06:56.698 Got JSON-RPC error response 00:06:56.698 response: 00:06:56.698 { 00:06:56.698 "code": -19, 00:06:56.699 "message": "No such device" 00:06:56.699 } 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.699 [2024-07-21 01:16:41.924564] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.699 01:16:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.958 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.958 01:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:56.958 { 00:06:56.958 "subsystems": [ 00:06:56.958 { 00:06:56.958 "subsystem": "keyring", 00:06:56.958 "config": [] 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "subsystem": "iobuf", 00:06:56.958 "config": [ 00:06:56.958 { 00:06:56.958 "method": "iobuf_set_options", 00:06:56.958 "params": { 00:06:56.958 "small_pool_count": 8192, 00:06:56.958 "large_pool_count": 1024, 00:06:56.958 "small_bufsize": 8192, 00:06:56.958 "large_bufsize": 135168 00:06:56.958 } 00:06:56.958 } 00:06:56.958 ] 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "subsystem": "sock", 00:06:56.958 "config": [ 00:06:56.958 { 00:06:56.958 "method": "sock_set_default_impl", 00:06:56.958 "params": { 00:06:56.958 "impl_name": "posix" 00:06:56.958 } 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "method": "sock_impl_set_options", 00:06:56.958 "params": { 00:06:56.958 "impl_name": "ssl", 00:06:56.958 "recv_buf_size": 4096, 00:06:56.958 "send_buf_size": 4096, 00:06:56.958 
"enable_recv_pipe": true, 00:06:56.958 "enable_quickack": false, 00:06:56.958 "enable_placement_id": 0, 00:06:56.958 "enable_zerocopy_send_server": true, 00:06:56.958 "enable_zerocopy_send_client": false, 00:06:56.958 "zerocopy_threshold": 0, 00:06:56.958 "tls_version": 0, 00:06:56.958 "enable_ktls": false 00:06:56.958 } 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "method": "sock_impl_set_options", 00:06:56.958 "params": { 00:06:56.958 "impl_name": "posix", 00:06:56.958 "recv_buf_size": 2097152, 00:06:56.958 "send_buf_size": 2097152, 00:06:56.958 "enable_recv_pipe": true, 00:06:56.958 "enable_quickack": false, 00:06:56.958 "enable_placement_id": 0, 00:06:56.958 "enable_zerocopy_send_server": true, 00:06:56.958 "enable_zerocopy_send_client": false, 00:06:56.958 "zerocopy_threshold": 0, 00:06:56.958 "tls_version": 0, 00:06:56.958 "enable_ktls": false 00:06:56.958 } 00:06:56.958 } 00:06:56.958 ] 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "subsystem": "vmd", 00:06:56.958 "config": [] 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "subsystem": "accel", 00:06:56.958 "config": [ 00:06:56.958 { 00:06:56.958 "method": "accel_set_options", 00:06:56.958 "params": { 00:06:56.958 "small_cache_size": 128, 00:06:56.958 "large_cache_size": 16, 00:06:56.958 "task_count": 2048, 00:06:56.958 "sequence_count": 2048, 00:06:56.958 "buf_count": 2048 00:06:56.958 } 00:06:56.958 } 00:06:56.958 ] 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "subsystem": "bdev", 00:06:56.958 "config": [ 00:06:56.958 { 00:06:56.958 "method": "bdev_set_options", 00:06:56.958 "params": { 00:06:56.958 "bdev_io_pool_size": 65535, 00:06:56.958 "bdev_io_cache_size": 256, 00:06:56.958 "bdev_auto_examine": true, 00:06:56.958 "iobuf_small_cache_size": 128, 00:06:56.958 "iobuf_large_cache_size": 16 00:06:56.958 } 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "method": "bdev_raid_set_options", 00:06:56.958 "params": { 00:06:56.958 "process_window_size_kb": 1024 00:06:56.958 } 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "method": "bdev_iscsi_set_options", 00:06:56.958 "params": { 00:06:56.958 "timeout_sec": 30 00:06:56.958 } 00:06:56.958 }, 00:06:56.958 { 00:06:56.958 "method": "bdev_nvme_set_options", 00:06:56.958 "params": { 00:06:56.959 "action_on_timeout": "none", 00:06:56.959 "timeout_us": 0, 00:06:56.959 "timeout_admin_us": 0, 00:06:56.959 "keep_alive_timeout_ms": 10000, 00:06:56.959 "arbitration_burst": 0, 00:06:56.959 "low_priority_weight": 0, 00:06:56.959 "medium_priority_weight": 0, 00:06:56.959 "high_priority_weight": 0, 00:06:56.959 "nvme_adminq_poll_period_us": 10000, 00:06:56.959 "nvme_ioq_poll_period_us": 0, 00:06:56.959 "io_queue_requests": 0, 00:06:56.959 "delay_cmd_submit": true, 00:06:56.959 "transport_retry_count": 4, 00:06:56.959 "bdev_retry_count": 3, 00:06:56.959 "transport_ack_timeout": 0, 00:06:56.959 "ctrlr_loss_timeout_sec": 0, 00:06:56.959 "reconnect_delay_sec": 0, 00:06:56.959 "fast_io_fail_timeout_sec": 0, 00:06:56.959 "disable_auto_failback": false, 00:06:56.959 "generate_uuids": false, 00:06:56.959 "transport_tos": 0, 00:06:56.959 "nvme_error_stat": false, 00:06:56.959 "rdma_srq_size": 0, 00:06:56.959 "io_path_stat": false, 00:06:56.959 "allow_accel_sequence": false, 00:06:56.959 "rdma_max_cq_size": 0, 00:06:56.959 "rdma_cm_event_timeout_ms": 0, 00:06:56.959 "dhchap_digests": [ 00:06:56.959 "sha256", 00:06:56.959 "sha384", 00:06:56.959 "sha512" 00:06:56.959 ], 00:06:56.959 "dhchap_dhgroups": [ 00:06:56.959 "null", 00:06:56.959 "ffdhe2048", 00:06:56.959 "ffdhe3072", 00:06:56.959 "ffdhe4096", 00:06:56.959 "ffdhe6144", 
00:06:56.959 "ffdhe8192" 00:06:56.959 ] 00:06:56.959 } 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "method": "bdev_nvme_set_hotplug", 00:06:56.959 "params": { 00:06:56.959 "period_us": 100000, 00:06:56.959 "enable": false 00:06:56.959 } 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "method": "bdev_wait_for_examine" 00:06:56.959 } 00:06:56.959 ] 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "scsi", 00:06:56.959 "config": null 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "scheduler", 00:06:56.959 "config": [ 00:06:56.959 { 00:06:56.959 "method": "framework_set_scheduler", 00:06:56.959 "params": { 00:06:56.959 "name": "static" 00:06:56.959 } 00:06:56.959 } 00:06:56.959 ] 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "vhost_scsi", 00:06:56.959 "config": [] 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "vhost_blk", 00:06:56.959 "config": [] 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "ublk", 00:06:56.959 "config": [] 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "nbd", 00:06:56.959 "config": [] 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "nvmf", 00:06:56.959 "config": [ 00:06:56.959 { 00:06:56.959 "method": "nvmf_set_config", 00:06:56.959 "params": { 00:06:56.959 "discovery_filter": "match_any", 00:06:56.959 "admin_cmd_passthru": { 00:06:56.959 "identify_ctrlr": false 00:06:56.959 } 00:06:56.959 } 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "method": "nvmf_set_max_subsystems", 00:06:56.959 "params": { 00:06:56.959 "max_subsystems": 1024 00:06:56.959 } 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "method": "nvmf_set_crdt", 00:06:56.959 "params": { 00:06:56.959 "crdt1": 0, 00:06:56.959 "crdt2": 0, 00:06:56.959 "crdt3": 0 00:06:56.959 } 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "method": "nvmf_create_transport", 00:06:56.959 "params": { 00:06:56.959 "trtype": "TCP", 00:06:56.959 "max_queue_depth": 128, 00:06:56.959 "max_io_qpairs_per_ctrlr": 127, 00:06:56.959 "in_capsule_data_size": 4096, 00:06:56.959 "max_io_size": 131072, 00:06:56.959 "io_unit_size": 131072, 00:06:56.959 "max_aq_depth": 128, 00:06:56.959 "num_shared_buffers": 511, 00:06:56.959 "buf_cache_size": 4294967295, 00:06:56.959 "dif_insert_or_strip": false, 00:06:56.959 "zcopy": false, 00:06:56.959 "c2h_success": true, 00:06:56.959 "sock_priority": 0, 00:06:56.959 "abort_timeout_sec": 1, 00:06:56.959 "ack_timeout": 0, 00:06:56.959 "data_wr_pool_size": 0 00:06:56.959 } 00:06:56.959 } 00:06:56.959 ] 00:06:56.959 }, 00:06:56.959 { 00:06:56.959 "subsystem": "iscsi", 00:06:56.959 "config": [ 00:06:56.959 { 00:06:56.959 "method": "iscsi_set_options", 00:06:56.959 "params": { 00:06:56.959 "node_base": "iqn.2016-06.io.spdk", 00:06:56.959 "max_sessions": 128, 00:06:56.959 "max_connections_per_session": 2, 00:06:56.959 "max_queue_depth": 64, 00:06:56.959 "default_time2wait": 2, 00:06:56.959 "default_time2retain": 20, 00:06:56.959 "first_burst_length": 8192, 00:06:56.959 "immediate_data": true, 00:06:56.959 "allow_duplicated_isid": false, 00:06:56.959 "error_recovery_level": 0, 00:06:56.959 "nop_timeout": 60, 00:06:56.959 "nop_in_interval": 30, 00:06:56.959 "disable_chap": false, 00:06:56.959 "require_chap": false, 00:06:56.959 "mutual_chap": false, 00:06:56.959 "chap_group": 0, 00:06:56.959 "max_large_datain_per_connection": 64, 00:06:56.959 "max_r2t_per_connection": 4, 00:06:56.959 "pdu_pool_size": 36864, 00:06:56.959 "immediate_data_pool_size": 16384, 00:06:56.959 "data_out_pool_size": 2048 00:06:56.959 } 00:06:56.959 } 00:06:56.959 ] 00:06:56.959 } 00:06:56.959 ] 
00:06:56.959 } 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74521 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74521 ']' 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74521 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74521 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.959 killing process with pid 74521 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74521' 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74521 00:06:56.959 01:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74521 00:06:57.527 01:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74550 00:06:57.527 01:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:57.527 01:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74550 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74550 ']' 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74550 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74550 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.796 killing process with pid 74550 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74550' 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74550 00:07:02.796 01:16:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74550 00:07:03.056 01:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:03.056 01:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:03.056 00:07:03.056 real 0m7.333s 00:07:03.056 user 0m6.499s 00:07:03.056 sys 0m1.104s 00:07:03.056 01:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.056 01:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:03.056 
************************************ 00:07:03.056 END TEST skip_rpc_with_json 00:07:03.056 ************************************ 00:07:03.316 01:16:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:03.316 01:16:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.316 01:16:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.316 01:16:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.316 ************************************ 00:07:03.316 START TEST skip_rpc_with_delay 00:07:03.316 ************************************ 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.316 [2024-07-21 01:16:48.513947] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:03.316 [2024-07-21 01:16:48.514102] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.316 00:07:03.316 real 0m0.183s 00:07:03.316 user 0m0.082s 00:07:03.316 sys 0m0.099s 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.316 01:16:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:03.316 ************************************ 00:07:03.316 END TEST skip_rpc_with_delay 00:07:03.316 ************************************ 00:07:03.575 01:16:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:03.575 01:16:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:03.575 01:16:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:03.575 01:16:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.575 01:16:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.575 01:16:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.575 ************************************ 00:07:03.575 START TEST exit_on_failed_rpc_init 00:07:03.575 ************************************ 00:07:03.575 01:16:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:07:03.575 01:16:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74661 00:07:03.575 01:16:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.575 01:16:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74661 00:07:03.575 01:16:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 74661 ']' 00:07:03.576 01:16:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.576 01:16:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.576 01:16:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.576 01:16:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.576 01:16:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.576 [2024-07-21 01:16:48.774681] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:03.576 [2024-07-21 01:16:48.774834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74661 ] 00:07:03.835 [2024-07-21 01:16:48.945574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.835 [2024-07-21 01:16:49.008889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:04.402 01:16:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.402 [2024-07-21 01:16:49.685030] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:04.402 [2024-07-21 01:16:49.685190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74681 ] 00:07:04.662 [2024-07-21 01:16:49.857199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.662 [2024-07-21 01:16:49.904575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.662 [2024-07-21 01:16:49.904671] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:04.662 [2024-07-21 01:16:49.904704] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:04.662 [2024-07-21 01:16:49.904721] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74661 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 74661 ']' 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 74661 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74661 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:04.922 killing process with pid 74661 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74661' 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 74661 00:07:04.922 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 74661 00:07:05.573 00:07:05.573 real 0m1.977s 00:07:05.573 user 0m1.953s 00:07:05.573 sys 0m0.692s 00:07:05.573 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.573 ************************************ 00:07:05.573 END TEST exit_on_failed_rpc_init 00:07:05.573 ************************************ 00:07:05.573 01:16:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:05.573 01:16:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:05.573 00:07:05.573 real 0m15.580s 00:07:05.573 user 0m13.743s 00:07:05.573 sys 0m2.695s 00:07:05.573 01:16:50 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.573 01:16:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.573 ************************************ 00:07:05.573 END TEST skip_rpc 00:07:05.573 ************************************ 00:07:05.573 01:16:50 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:05.573 01:16:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.573 01:16:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.573 01:16:50 -- common/autotest_common.sh@10 -- # set +x 00:07:05.573 
************************************ 00:07:05.573 START TEST rpc_client 00:07:05.573 ************************************ 00:07:05.574 01:16:50 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:05.834 * Looking for test storage... 00:07:05.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:05.834 01:16:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:05.834 OK 00:07:05.834 01:16:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:05.834 00:07:05.834 real 0m0.216s 00:07:05.834 user 0m0.085s 00:07:05.834 sys 0m0.137s 00:07:05.834 01:16:51 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.834 01:16:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:05.834 ************************************ 00:07:05.834 END TEST rpc_client 00:07:05.834 ************************************ 00:07:05.834 01:16:51 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:05.834 01:16:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.834 01:16:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.834 01:16:51 -- common/autotest_common.sh@10 -- # set +x 00:07:05.834 ************************************ 00:07:05.834 START TEST json_config 00:07:05.834 ************************************ 00:07:05.834 01:16:51 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58c07e6b-7f02-4639-b8ee-ffc2403f8ec7 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=58c07e6b-7f02-4639-b8ee-ffc2403f8ec7 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.094 01:16:51 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.094 01:16:51 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.094 01:16:51 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.094 01:16:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config -- paths/export.sh@5 -- # export PATH 00:07:06.094 01:16:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@47 -- # : 0 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.094 01:16:51 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:06.094 WARNING: No tests are enabled so not running JSON configuration tests 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:06.094 01:16:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:06.094 00:07:06.094 real 0m0.131s 00:07:06.094 user 0m0.058s 00:07:06.094 sys 0m0.074s 00:07:06.094 01:16:51 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.094 01:16:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.094 ************************************ 00:07:06.094 END TEST json_config 00:07:06.094 ************************************ 00:07:06.094 01:16:51 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:06.094 01:16:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.094 01:16:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.094 01:16:51 -- common/autotest_common.sh@10 -- # set +x 00:07:06.094 ************************************ 00:07:06.094 START TEST json_config_extra_key 00:07:06.094 ************************************ 00:07:06.094 01:16:51 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:06.094 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58c07e6b-7f02-4639-b8ee-ffc2403f8ec7 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=58c07e6b-7f02-4639-b8ee-ffc2403f8ec7 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.094 01:16:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.094 01:16:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.094 01:16:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.094 
01:16:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:06.094 01:16:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.094 01:16:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:06.095 01:16:51 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:06.095 INFO: launching applications... 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:06.095 01:16:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74845 00:07:06.095 Waiting for target to run... 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74845 /var/tmp/spdk_tgt.sock 00:07:06.095 01:16:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:06.095 01:16:51 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 74845 ']' 00:07:06.095 01:16:51 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:06.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:06.095 01:16:51 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.095 01:16:51 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:06.095 01:16:51 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.095 01:16:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:06.355 [2024-07-21 01:16:51.484403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
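For reference, the target launched at this point is the plain spdk_tgt binary with a JSON subsystem config applied during initialization; the command shape is visible in the trace. A hedged sketch of doing the same by hand (the contents of extra_key.json are not shown in this log, so the config path is only a placeholder copied from the trace):

# Start spdk_tgt on one core with 1024 MiB of memory, a private RPC socket,
# and a JSON config that is applied while the app initializes.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_SOCK=/var/tmp/spdk_tgt.sock
CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json   # placeholder path from the log

"$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
tgt_pid=$!
echo "Waiting for target to run... (pid $tgt_pid)"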
00:07:06.355 [2024-07-21 01:16:51.484534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74845 ] 00:07:06.614 [2024-07-21 01:16:51.875627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.614 [2024-07-21 01:16:51.914468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.182 00:07:07.182 INFO: shutting down applications... 00:07:07.182 01:16:52 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.182 01:16:52 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:07.182 01:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:07.182 01:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74845 ]] 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74845 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74845 00:07:07.182 01:16:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:07.752 01:16:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:07.752 01:16:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:07.752 01:16:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74845 00:07:07.752 01:16:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:08.011 01:16:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:08.011 01:16:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:08.011 SPDK target shutdown done 00:07:08.011 Success 00:07:08.011 01:16:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74845 00:07:08.011 01:16:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:08.011 01:16:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:08.011 01:16:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:08.011 01:16:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:08.011 01:16:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:08.011 00:07:08.011 real 0m2.011s 00:07:08.011 user 0m1.389s 00:07:08.011 sys 0m0.496s 00:07:08.011 01:16:53 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.011 01:16:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:08.011 ************************************ 00:07:08.011 END TEST json_config_extra_key 00:07:08.011 ************************************ 00:07:08.269 01:16:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:08.269 01:16:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.270 01:16:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.270 01:16:53 -- common/autotest_common.sh@10 -- # set +x 00:07:08.270 ************************************ 00:07:08.270 START TEST alias_rpc 00:07:08.270 ************************************ 00:07:08.270 01:16:53 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:08.270 * Looking for test storage... 00:07:08.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:08.270 01:16:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:08.270 01:16:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74918 00:07:08.270 01:16:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:08.270 01:16:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74918 00:07:08.270 01:16:53 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 74918 ']' 00:07:08.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.270 01:16:53 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.270 01:16:53 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.270 01:16:53 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.270 01:16:53 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.270 01:16:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.270 [2024-07-21 01:16:53.572155] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
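The waitforlisten step shown above blocks until the freshly started target answers RPCs on its UNIX-domain socket. A rough sketch of such a wait loop, assuming rpc_get_methods is used as the liveness probe (the real autotest helper may probe differently):

# Poll the RPC socket until the target responds, or give up after max_retries.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock
max_retries=100
i=0
until "$RPC_PY" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
        echo "target never started listening on $rpc_addr" >&2
        exit 1
    fi
    sleep 0.1
done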
00:07:08.270 [2024-07-21 01:16:53.572283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ] 00:07:08.529 [2024-07-21 01:16:53.742126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.529 [2024-07-21 01:16:53.808269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.095 01:16:54 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.095 01:16:54 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:09.095 01:16:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:09.353 01:16:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74918 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 74918 ']' 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 74918 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74918 00:07:09.353 killing process with pid 74918 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74918' 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@965 -- # kill 74918 00:07:09.353 01:16:54 alias_rpc -- common/autotest_common.sh@970 -- # wait 74918 00:07:09.920 ************************************ 00:07:09.920 END TEST alias_rpc 00:07:09.920 ************************************ 00:07:09.920 00:07:09.920 real 0m1.848s 00:07:09.920 user 0m1.695s 00:07:09.920 sys 0m0.650s 00:07:09.920 01:16:55 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.920 01:16:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.178 01:16:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:10.178 01:16:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:10.178 01:16:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.178 01:16:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.178 01:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:10.178 ************************************ 00:07:10.178 START TEST spdkcli_tcp 00:07:10.178 ************************************ 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:10.178 * Looking for test storage... 
00:07:10.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=74990 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:10.178 01:16:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 74990 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 74990 ']' 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:10.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:10.178 01:16:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.436 [2024-07-21 01:16:55.508391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
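What follows in the trace is the TCP leg of the spdkcli test: a socat process bridges TCP port 9998 to the target's UNIX-domain RPC socket so rpc.py can reach it over 127.0.0.1. A condensed sketch of that bridge and one call over it, with the option values copied from the log:

# Bridge TCP 127.0.0.1:9998 to the SPDK RPC UNIX socket, then issue an RPC over TCP.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r: connection retries, -t: per-request timeout in seconds, -s/-p: TCP address and port.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true   # tear the bridge down; socat may already have exited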
00:07:10.436 [2024-07-21 01:16:55.508553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74990 ] 00:07:10.436 [2024-07-21 01:16:55.681065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.695 [2024-07-21 01:16:55.748589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.695 [2024-07-21 01:16:55.748697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.263 01:16:56 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:11.263 01:16:56 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:07:11.263 01:16:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:11.263 01:16:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=75007 00:07:11.263 01:16:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:11.263 [ 00:07:11.263 "bdev_malloc_delete", 00:07:11.263 "bdev_malloc_create", 00:07:11.263 "bdev_null_resize", 00:07:11.263 "bdev_null_delete", 00:07:11.263 "bdev_null_create", 00:07:11.263 "bdev_nvme_cuse_unregister", 00:07:11.263 "bdev_nvme_cuse_register", 00:07:11.263 "bdev_opal_new_user", 00:07:11.263 "bdev_opal_set_lock_state", 00:07:11.263 "bdev_opal_delete", 00:07:11.263 "bdev_opal_get_info", 00:07:11.263 "bdev_opal_create", 00:07:11.263 "bdev_nvme_opal_revert", 00:07:11.263 "bdev_nvme_opal_init", 00:07:11.263 "bdev_nvme_send_cmd", 00:07:11.263 "bdev_nvme_get_path_iostat", 00:07:11.263 "bdev_nvme_get_mdns_discovery_info", 00:07:11.263 "bdev_nvme_stop_mdns_discovery", 00:07:11.263 "bdev_nvme_start_mdns_discovery", 00:07:11.263 "bdev_nvme_set_multipath_policy", 00:07:11.263 "bdev_nvme_set_preferred_path", 00:07:11.263 "bdev_nvme_get_io_paths", 00:07:11.263 "bdev_nvme_remove_error_injection", 00:07:11.263 "bdev_nvme_add_error_injection", 00:07:11.263 "bdev_nvme_get_discovery_info", 00:07:11.263 "bdev_nvme_stop_discovery", 00:07:11.263 "bdev_nvme_start_discovery", 00:07:11.263 "bdev_nvme_get_controller_health_info", 00:07:11.263 "bdev_nvme_disable_controller", 00:07:11.263 "bdev_nvme_enable_controller", 00:07:11.263 "bdev_nvme_reset_controller", 00:07:11.263 "bdev_nvme_get_transport_statistics", 00:07:11.263 "bdev_nvme_apply_firmware", 00:07:11.263 "bdev_nvme_detach_controller", 00:07:11.263 "bdev_nvme_get_controllers", 00:07:11.263 "bdev_nvme_attach_controller", 00:07:11.263 "bdev_nvme_set_hotplug", 00:07:11.263 "bdev_nvme_set_options", 00:07:11.263 "bdev_passthru_delete", 00:07:11.263 "bdev_passthru_create", 00:07:11.263 "bdev_lvol_set_parent_bdev", 00:07:11.263 "bdev_lvol_set_parent", 00:07:11.263 "bdev_lvol_check_shallow_copy", 00:07:11.263 "bdev_lvol_start_shallow_copy", 00:07:11.263 "bdev_lvol_grow_lvstore", 00:07:11.263 "bdev_lvol_get_lvols", 00:07:11.263 "bdev_lvol_get_lvstores", 00:07:11.263 "bdev_lvol_delete", 00:07:11.263 "bdev_lvol_set_read_only", 00:07:11.263 "bdev_lvol_resize", 00:07:11.263 "bdev_lvol_decouple_parent", 00:07:11.263 "bdev_lvol_inflate", 00:07:11.263 "bdev_lvol_rename", 00:07:11.263 "bdev_lvol_clone_bdev", 00:07:11.263 "bdev_lvol_clone", 00:07:11.263 "bdev_lvol_snapshot", 00:07:11.263 "bdev_lvol_create", 00:07:11.263 "bdev_lvol_delete_lvstore", 00:07:11.263 "bdev_lvol_rename_lvstore", 00:07:11.263 "bdev_lvol_create_lvstore", 00:07:11.263 
"bdev_raid_set_options", 00:07:11.263 "bdev_raid_remove_base_bdev", 00:07:11.263 "bdev_raid_add_base_bdev", 00:07:11.263 "bdev_raid_delete", 00:07:11.263 "bdev_raid_create", 00:07:11.263 "bdev_raid_get_bdevs", 00:07:11.263 "bdev_error_inject_error", 00:07:11.263 "bdev_error_delete", 00:07:11.263 "bdev_error_create", 00:07:11.263 "bdev_split_delete", 00:07:11.263 "bdev_split_create", 00:07:11.263 "bdev_delay_delete", 00:07:11.263 "bdev_delay_create", 00:07:11.263 "bdev_delay_update_latency", 00:07:11.263 "bdev_zone_block_delete", 00:07:11.263 "bdev_zone_block_create", 00:07:11.263 "blobfs_create", 00:07:11.263 "blobfs_detect", 00:07:11.263 "blobfs_set_cache_size", 00:07:11.263 "bdev_xnvme_delete", 00:07:11.263 "bdev_xnvme_create", 00:07:11.263 "bdev_aio_delete", 00:07:11.263 "bdev_aio_rescan", 00:07:11.263 "bdev_aio_create", 00:07:11.263 "bdev_ftl_set_property", 00:07:11.263 "bdev_ftl_get_properties", 00:07:11.263 "bdev_ftl_get_stats", 00:07:11.263 "bdev_ftl_unmap", 00:07:11.263 "bdev_ftl_unload", 00:07:11.263 "bdev_ftl_delete", 00:07:11.263 "bdev_ftl_load", 00:07:11.263 "bdev_ftl_create", 00:07:11.263 "bdev_virtio_attach_controller", 00:07:11.263 "bdev_virtio_scsi_get_devices", 00:07:11.263 "bdev_virtio_detach_controller", 00:07:11.263 "bdev_virtio_blk_set_hotplug", 00:07:11.263 "bdev_iscsi_delete", 00:07:11.263 "bdev_iscsi_create", 00:07:11.263 "bdev_iscsi_set_options", 00:07:11.263 "accel_error_inject_error", 00:07:11.263 "ioat_scan_accel_module", 00:07:11.263 "dsa_scan_accel_module", 00:07:11.263 "iaa_scan_accel_module", 00:07:11.263 "keyring_file_remove_key", 00:07:11.263 "keyring_file_add_key", 00:07:11.263 "keyring_linux_set_options", 00:07:11.263 "iscsi_get_histogram", 00:07:11.263 "iscsi_enable_histogram", 00:07:11.263 "iscsi_set_options", 00:07:11.263 "iscsi_get_auth_groups", 00:07:11.263 "iscsi_auth_group_remove_secret", 00:07:11.263 "iscsi_auth_group_add_secret", 00:07:11.263 "iscsi_delete_auth_group", 00:07:11.263 "iscsi_create_auth_group", 00:07:11.263 "iscsi_set_discovery_auth", 00:07:11.263 "iscsi_get_options", 00:07:11.263 "iscsi_target_node_request_logout", 00:07:11.263 "iscsi_target_node_set_redirect", 00:07:11.263 "iscsi_target_node_set_auth", 00:07:11.263 "iscsi_target_node_add_lun", 00:07:11.263 "iscsi_get_stats", 00:07:11.263 "iscsi_get_connections", 00:07:11.263 "iscsi_portal_group_set_auth", 00:07:11.263 "iscsi_start_portal_group", 00:07:11.263 "iscsi_delete_portal_group", 00:07:11.263 "iscsi_create_portal_group", 00:07:11.263 "iscsi_get_portal_groups", 00:07:11.263 "iscsi_delete_target_node", 00:07:11.263 "iscsi_target_node_remove_pg_ig_maps", 00:07:11.263 "iscsi_target_node_add_pg_ig_maps", 00:07:11.263 "iscsi_create_target_node", 00:07:11.263 "iscsi_get_target_nodes", 00:07:11.263 "iscsi_delete_initiator_group", 00:07:11.263 "iscsi_initiator_group_remove_initiators", 00:07:11.263 "iscsi_initiator_group_add_initiators", 00:07:11.263 "iscsi_create_initiator_group", 00:07:11.263 "iscsi_get_initiator_groups", 00:07:11.263 "nvmf_set_crdt", 00:07:11.263 "nvmf_set_config", 00:07:11.263 "nvmf_set_max_subsystems", 00:07:11.263 "nvmf_stop_mdns_prr", 00:07:11.263 "nvmf_publish_mdns_prr", 00:07:11.263 "nvmf_subsystem_get_listeners", 00:07:11.263 "nvmf_subsystem_get_qpairs", 00:07:11.263 "nvmf_subsystem_get_controllers", 00:07:11.263 "nvmf_get_stats", 00:07:11.263 "nvmf_get_transports", 00:07:11.263 "nvmf_create_transport", 00:07:11.263 "nvmf_get_targets", 00:07:11.263 "nvmf_delete_target", 00:07:11.263 "nvmf_create_target", 00:07:11.263 "nvmf_subsystem_allow_any_host", 
00:07:11.263 "nvmf_subsystem_remove_host", 00:07:11.263 "nvmf_subsystem_add_host", 00:07:11.263 "nvmf_ns_remove_host", 00:07:11.263 "nvmf_ns_add_host", 00:07:11.263 "nvmf_subsystem_remove_ns", 00:07:11.263 "nvmf_subsystem_add_ns", 00:07:11.263 "nvmf_subsystem_listener_set_ana_state", 00:07:11.263 "nvmf_discovery_get_referrals", 00:07:11.263 "nvmf_discovery_remove_referral", 00:07:11.263 "nvmf_discovery_add_referral", 00:07:11.263 "nvmf_subsystem_remove_listener", 00:07:11.263 "nvmf_subsystem_add_listener", 00:07:11.263 "nvmf_delete_subsystem", 00:07:11.263 "nvmf_create_subsystem", 00:07:11.263 "nvmf_get_subsystems", 00:07:11.263 "env_dpdk_get_mem_stats", 00:07:11.263 "nbd_get_disks", 00:07:11.263 "nbd_stop_disk", 00:07:11.263 "nbd_start_disk", 00:07:11.263 "ublk_recover_disk", 00:07:11.263 "ublk_get_disks", 00:07:11.263 "ublk_stop_disk", 00:07:11.263 "ublk_start_disk", 00:07:11.263 "ublk_destroy_target", 00:07:11.263 "ublk_create_target", 00:07:11.263 "virtio_blk_create_transport", 00:07:11.263 "virtio_blk_get_transports", 00:07:11.263 "vhost_controller_set_coalescing", 00:07:11.263 "vhost_get_controllers", 00:07:11.263 "vhost_delete_controller", 00:07:11.263 "vhost_create_blk_controller", 00:07:11.263 "vhost_scsi_controller_remove_target", 00:07:11.263 "vhost_scsi_controller_add_target", 00:07:11.263 "vhost_start_scsi_controller", 00:07:11.263 "vhost_create_scsi_controller", 00:07:11.263 "thread_set_cpumask", 00:07:11.263 "framework_get_scheduler", 00:07:11.263 "framework_set_scheduler", 00:07:11.263 "framework_get_reactors", 00:07:11.263 "thread_get_io_channels", 00:07:11.263 "thread_get_pollers", 00:07:11.263 "thread_get_stats", 00:07:11.263 "framework_monitor_context_switch", 00:07:11.263 "spdk_kill_instance", 00:07:11.263 "log_enable_timestamps", 00:07:11.263 "log_get_flags", 00:07:11.263 "log_clear_flag", 00:07:11.263 "log_set_flag", 00:07:11.263 "log_get_level", 00:07:11.263 "log_set_level", 00:07:11.263 "log_get_print_level", 00:07:11.263 "log_set_print_level", 00:07:11.263 "framework_enable_cpumask_locks", 00:07:11.263 "framework_disable_cpumask_locks", 00:07:11.263 "framework_wait_init", 00:07:11.263 "framework_start_init", 00:07:11.263 "scsi_get_devices", 00:07:11.263 "bdev_get_histogram", 00:07:11.263 "bdev_enable_histogram", 00:07:11.263 "bdev_set_qos_limit", 00:07:11.263 "bdev_set_qd_sampling_period", 00:07:11.263 "bdev_get_bdevs", 00:07:11.263 "bdev_reset_iostat", 00:07:11.263 "bdev_get_iostat", 00:07:11.263 "bdev_examine", 00:07:11.263 "bdev_wait_for_examine", 00:07:11.263 "bdev_set_options", 00:07:11.263 "notify_get_notifications", 00:07:11.263 "notify_get_types", 00:07:11.263 "accel_get_stats", 00:07:11.263 "accel_set_options", 00:07:11.263 "accel_set_driver", 00:07:11.263 "accel_crypto_key_destroy", 00:07:11.263 "accel_crypto_keys_get", 00:07:11.263 "accel_crypto_key_create", 00:07:11.263 "accel_assign_opc", 00:07:11.263 "accel_get_module_info", 00:07:11.263 "accel_get_opc_assignments", 00:07:11.263 "vmd_rescan", 00:07:11.263 "vmd_remove_device", 00:07:11.263 "vmd_enable", 00:07:11.263 "sock_get_default_impl", 00:07:11.263 "sock_set_default_impl", 00:07:11.263 "sock_impl_set_options", 00:07:11.263 "sock_impl_get_options", 00:07:11.263 "iobuf_get_stats", 00:07:11.263 "iobuf_set_options", 00:07:11.263 "framework_get_pci_devices", 00:07:11.263 "framework_get_config", 00:07:11.263 "framework_get_subsystems", 00:07:11.263 "trace_get_info", 00:07:11.263 "trace_get_tpoint_group_mask", 00:07:11.263 "trace_disable_tpoint_group", 00:07:11.263 "trace_enable_tpoint_group", 
00:07:11.263 "trace_clear_tpoint_mask", 00:07:11.263 "trace_set_tpoint_mask", 00:07:11.263 "keyring_get_keys", 00:07:11.263 "spdk_get_version", 00:07:11.263 "rpc_get_methods" 00:07:11.263 ] 00:07:11.263 01:16:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:11.263 01:16:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.263 01:16:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.263 01:16:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:11.263 01:16:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 74990 00:07:11.263 01:16:56 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 74990 ']' 00:07:11.263 01:16:56 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 74990 00:07:11.263 01:16:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:07:11.521 01:16:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.521 01:16:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74990 00:07:11.521 killing process with pid 74990 00:07:11.521 01:16:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:11.521 01:16:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:11.521 01:16:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74990' 00:07:11.521 01:16:56 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 74990 00:07:11.521 01:16:56 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 74990 00:07:12.088 ************************************ 00:07:12.088 END TEST spdkcli_tcp 00:07:12.088 ************************************ 00:07:12.088 00:07:12.088 real 0m1.929s 00:07:12.088 user 0m3.074s 00:07:12.088 sys 0m0.702s 00:07:12.088 01:16:57 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.088 01:16:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.088 01:16:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:12.088 01:16:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:12.088 01:16:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.088 01:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:12.088 ************************************ 00:07:12.088 START TEST dpdk_mem_utility 00:07:12.088 ************************************ 00:07:12.088 01:16:57 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:12.088 * Looking for test storage... 
00:07:12.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:12.347 01:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:12.347 01:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75082 00:07:12.347 01:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:12.347 01:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75082 00:07:12.347 01:16:57 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 75082 ']' 00:07:12.347 01:16:57 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.347 01:16:57 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.347 01:16:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.347 01:16:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.347 01:16:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:12.347 [2024-07-21 01:16:57.499622] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:12.347 [2024-07-21 01:16:57.499755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75082 ] 00:07:12.606 [2024-07-21 01:16:57.670063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.606 [2024-07-21 01:16:57.741993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.176 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.176 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:07:13.176 01:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:13.176 01:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:13.176 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.176 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:13.176 { 00:07:13.176 "filename": "/tmp/spdk_mem_dump.txt" 00:07:13.176 } 00:07:13.176 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.176 01:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:13.176 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:13.176 1 heaps totaling size 814.000000 MiB 00:07:13.176 size: 814.000000 MiB heap id: 0 00:07:13.176 end heaps---------- 00:07:13.176 8 mempools totaling size 598.116089 MiB 00:07:13.176 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:13.176 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:13.176 size: 84.521057 MiB name: bdev_io_75082 00:07:13.176 size: 51.011292 MiB name: evtpool_75082 00:07:13.176 size: 50.003479 MiB name: msgpool_75082 00:07:13.176 size: 21.763794 MiB name: PDU_Pool 00:07:13.176 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:13.176 size: 0.026123 
MiB name: Session_Pool 00:07:13.176 end mempools------- 00:07:13.176 6 memzones totaling size 4.142822 MiB 00:07:13.176 size: 1.000366 MiB name: RG_ring_0_75082 00:07:13.176 size: 1.000366 MiB name: RG_ring_1_75082 00:07:13.176 size: 1.000366 MiB name: RG_ring_4_75082 00:07:13.176 size: 1.000366 MiB name: RG_ring_5_75082 00:07:13.176 size: 0.125366 MiB name: RG_ring_2_75082 00:07:13.176 size: 0.015991 MiB name: RG_ring_3_75082 00:07:13.176 end memzones------- 00:07:13.176 01:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:13.176 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:07:13.176 list of free elements. size: 12.471558 MiB 00:07:13.176 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:13.176 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:13.176 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:13.176 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:13.176 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:13.176 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:13.176 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:13.176 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:13.176 element at address: 0x200000200000 with size: 0.833191 MiB 00:07:13.176 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:07:13.176 element at address: 0x20000b200000 with size: 0.489807 MiB 00:07:13.176 element at address: 0x200000800000 with size: 0.486145 MiB 00:07:13.176 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:13.176 element at address: 0x200027e00000 with size: 0.395752 MiB 00:07:13.176 element at address: 0x200003a00000 with size: 0.347839 MiB 00:07:13.176 list of standard malloc elements. 
size: 199.265869 MiB 00:07:13.176 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:13.176 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:13.176 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:13.176 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:13.176 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:13.176 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:13.176 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:13.176 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:13.176 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:13.176 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:07:13.176 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087c740 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087c800 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087c980 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59180 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59240 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59300 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59480 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59540 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59600 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59780 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59840 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59900 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:13.176 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:13.176 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91f00 
with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa943c0 with size: 0.000183 MiB 
00:07:13.177 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:13.177 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e65500 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:13.177 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:13.177 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:13.178 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:13.178 list of memzone associated elements. size: 602.262573 MiB 00:07:13.178 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:13.178 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:13.178 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:13.178 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:13.178 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:13.178 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75082_0 00:07:13.178 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:13.178 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75082_0 00:07:13.178 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:13.178 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75082_0 00:07:13.178 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:13.178 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:13.178 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:13.178 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:13.178 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:13.178 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75082 00:07:13.178 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:13.178 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75082 00:07:13.178 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:13.178 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75082 00:07:13.178 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:13.178 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:13.178 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:13.178 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:13.178 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:13.178 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:13.178 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:13.178 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:13.178 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:13.178 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75082 00:07:13.178 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:13.178 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75082 00:07:13.178 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:13.178 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75082 00:07:13.178 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:13.178 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75082 00:07:13.178 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:13.178 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75082 00:07:13.178 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:07:13.178 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:13.178 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:13.178 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:13.178 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:13.178 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:13.178 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:13.178 associated memzone info: size: 0.125366 MiB name: RG_ring_2_75082 00:07:13.178 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:13.178 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:13.178 element at address: 0x200027e65680 with size: 0.023743 MiB 00:07:13.178 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:13.178 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:13.178 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75082 00:07:13.178 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:07:13.178 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:13.178 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:13.178 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75082 00:07:13.178 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:13.178 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75082 00:07:13.178 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:07:13.178 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:13.178 01:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:13.178 01:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75082 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 75082 ']' 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 75082 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75082 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75082' 00:07:13.178 killing process with pid 75082 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 75082 00:07:13.178 01:16:58 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 75082 00:07:13.746 00:07:13.746 real 0m1.768s 00:07:13.746 user 0m1.547s 00:07:13.746 sys 0m0.645s 00:07:13.746 ************************************ 00:07:13.746 END TEST dpdk_mem_utility 00:07:13.746 ************************************ 00:07:13.746 01:16:59 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.746 01:16:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:14.005 01:16:59 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:14.005 01:16:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.005 01:16:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.005 
01:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:14.005 ************************************ 00:07:14.005 START TEST event 00:07:14.005 ************************************ 00:07:14.005 01:16:59 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:14.005 * Looking for test storage... 00:07:14.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:14.005 01:16:59 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:14.005 01:16:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:14.005 01:16:59 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:14.005 01:16:59 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:14.005 01:16:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.005 01:16:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.005 ************************************ 00:07:14.005 START TEST event_perf 00:07:14.005 ************************************ 00:07:14.005 01:16:59 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:14.005 Running I/O for 1 seconds...[2024-07-21 01:16:59.299779] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:14.005 [2024-07-21 01:16:59.300110] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75160 ] 00:07:14.264 [2024-07-21 01:16:59.470721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.264 [2024-07-21 01:16:59.547963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.264 Running I/O for 1 seconds...[2024-07-21 01:16:59.548170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.264 [2024-07-21 01:16:59.548173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.264 [2024-07-21 01:16:59.548296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.642 00:07:15.642 lcore 0: 104565 00:07:15.642 lcore 1: 104566 00:07:15.642 lcore 2: 104566 00:07:15.642 lcore 3: 104565 00:07:15.642 done. 
00:07:15.642 00:07:15.642 real 0m1.433s 00:07:15.642 user 0m4.162s 00:07:15.642 sys 0m0.147s 00:07:15.642 01:17:00 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.642 01:17:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 ************************************ 00:07:15.642 END TEST event_perf 00:07:15.642 ************************************ 00:07:15.642 01:17:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:15.642 01:17:00 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:15.642 01:17:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.642 01:17:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 ************************************ 00:07:15.642 START TEST event_reactor 00:07:15.642 ************************************ 00:07:15.642 01:17:00 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:15.642 [2024-07-21 01:17:00.810321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:15.642 [2024-07-21 01:17:00.810464] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75199 ] 00:07:15.901 [2024-07-21 01:17:00.981961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.901 [2024-07-21 01:17:01.056775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.277 test_start 00:07:17.277 oneshot 00:07:17.277 tick 100 00:07:17.277 tick 100 00:07:17.277 tick 250 00:07:17.277 tick 100 00:07:17.277 tick 100 00:07:17.277 tick 100 00:07:17.277 tick 250 00:07:17.277 tick 500 00:07:17.277 tick 100 00:07:17.277 tick 100 00:07:17.277 tick 250 00:07:17.277 tick 100 00:07:17.277 tick 100 00:07:17.277 test_end 00:07:17.277 00:07:17.277 real 0m1.427s 00:07:17.277 user 0m1.177s 00:07:17.277 sys 0m0.141s 00:07:17.277 01:17:02 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.277 ************************************ 00:07:17.277 END TEST event_reactor 00:07:17.277 ************************************ 00:07:17.277 01:17:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:17.277 01:17:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.277 01:17:02 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:17.277 01:17:02 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.277 01:17:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.277 ************************************ 00:07:17.277 START TEST event_reactor_perf 00:07:17.277 ************************************ 00:07:17.277 01:17:02 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:17.277 [2024-07-21 01:17:02.305466] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:17.277 [2024-07-21 01:17:02.305642] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75236 ] 00:07:17.277 [2024-07-21 01:17:02.474018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.277 [2024-07-21 01:17:02.546821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.652 test_start 00:07:18.652 test_end 00:07:18.652 Performance: 379348 events per second 00:07:18.652 00:07:18.652 real 0m1.416s 00:07:18.652 user 0m1.176s 00:07:18.652 sys 0m0.133s 00:07:18.652 01:17:03 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.652 01:17:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 END TEST event_reactor_perf 00:07:18.652 ************************************ 00:07:18.652 01:17:03 event -- event/event.sh@49 -- # uname -s 00:07:18.652 01:17:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:18.652 01:17:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:18.652 01:17:03 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:18.652 01:17:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.652 01:17:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.652 ************************************ 00:07:18.652 START TEST event_scheduler 00:07:18.652 ************************************ 00:07:18.652 01:17:03 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:18.652 * Looking for test storage... 00:07:18.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:18.652 01:17:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:18.652 01:17:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75297 00:07:18.652 01:17:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.652 01:17:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:18.652 01:17:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75297 00:07:18.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.652 01:17:03 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 75297 ']' 00:07:18.652 01:17:03 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.652 01:17:03 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.652 01:17:03 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.652 01:17:03 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.652 01:17:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.910 [2024-07-21 01:17:03.972403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:18.910 [2024-07-21 01:17:03.972546] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75297 ] 00:07:18.910 [2024-07-21 01:17:04.132259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.910 [2024-07-21 01:17:04.182880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.910 [2024-07-21 01:17:04.183057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.910 [2024-07-21 01:17:04.183256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.910 [2024-07-21 01:17:04.183119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.477 01:17:04 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:19.477 01:17:04 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:07:19.477 01:17:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:19.477 01:17:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.477 01:17:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:19.477 POWER: Env isn't set yet! 00:07:19.477 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:19.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.477 POWER: Cannot set governor of lcore 0 to userspace 00:07:19.477 POWER: Attempting to initialise PSTAT power management... 00:07:19.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.477 POWER: Cannot set governor of lcore 0 to performance 00:07:19.477 POWER: Attempting to initialise AMD PSTATE power management... 00:07:19.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.477 POWER: Cannot set governor of lcore 0 to userspace 00:07:19.477 POWER: Attempting to initialise CPPC power management... 00:07:19.477 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:19.477 POWER: Cannot set governor of lcore 0 to userspace 00:07:19.477 POWER: Attempting to initialise VM power management... 
00:07:19.477 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:19.477 POWER: Unable to set Power Management Environment for lcore 0 00:07:19.477 [2024-07-21 01:17:04.784293] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:07:19.477 [2024-07-21 01:17:04.784322] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:07:19.477 [2024-07-21 01:17:04.784339] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:19.477 [2024-07-21 01:17:04.784370] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:19.477 [2024-07-21 01:17:04.784400] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:19.477 [2024-07-21 01:17:04.784411] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:19.736 01:17:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:19.736 01:17:04 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 [2024-07-21 01:17:04.905275] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:19.736 01:17:04 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:19.736 01:17:04 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:19.736 01:17:04 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 ************************************ 00:07:19.736 START TEST scheduler_create_thread 00:07:19.736 ************************************ 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 2 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 3 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:19.736 01:17:04 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 4 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 5 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 6 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 7 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 8 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.736 9 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.736 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.303 10 
00:07:20.303 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.303 01:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:20.303 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.303 01:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.680 01:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.680 01:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:21.680 01:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:21.680 01:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.680 01:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.617 01:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.617 01:17:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:22.617 01:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.617 01:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.184 01:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.184 01:17:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:23.184 01:17:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:23.184 01:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.184 01:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.118 ************************************ 00:07:24.118 END TEST scheduler_create_thread 00:07:24.118 ************************************ 00:07:24.118 01:17:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.118 00:07:24.118 real 0m4.214s 00:07:24.118 user 0m0.028s 00:07:24.118 sys 0m0.008s 00:07:24.118 01:17:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.118 01:17:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.118 01:17:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:24.118 01:17:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75297 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 75297 ']' 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 75297 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:24.118 01:17:09 event.event_scheduler 
-- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75297 00:07:24.118 killing process with pid 75297 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75297' 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 75297 00:07:24.118 01:17:09 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 75297 00:07:24.376 [2024-07-21 01:17:09.512702] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:24.634 00:07:24.634 real 0m6.168s 00:07:24.634 user 0m13.744s 00:07:24.634 sys 0m0.524s 00:07:24.634 01:17:09 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.634 01:17:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:24.634 ************************************ 00:07:24.634 END TEST event_scheduler 00:07:24.634 ************************************ 00:07:24.900 01:17:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:24.900 01:17:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:24.900 01:17:09 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.900 01:17:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.900 01:17:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.900 ************************************ 00:07:24.900 START TEST app_repeat 00:07:24.900 ************************************ 00:07:24.900 01:17:10 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75413 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:24.900 Process app_repeat pid: 75413 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75413' 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.900 spdk_app_start Round 0 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:24.900 01:17:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75413 /var/tmp/spdk-nbd.sock 00:07:24.900 01:17:10 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75413 ']' 00:07:24.900 01:17:10 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.900 01:17:10 event.app_repeat -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:07:24.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:24.900 01:17:10 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.900 01:17:10 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.900 01:17:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.900 [2024-07-21 01:17:10.072714] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:24.900 [2024-07-21 01:17:10.073038] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75413 ] 00:07:25.173 [2024-07-21 01:17:10.245505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.173 [2024-07-21 01:17:10.320651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.173 [2024-07-21 01:17:10.320756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.739 01:17:10 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:25.739 01:17:10 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:25.739 01:17:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.997 Malloc0 00:07:25.997 01:17:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.256 Malloc1 00:07:26.256 01:17:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:26.256 /dev/nbd0 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.256 01:17:11 
event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.256 1+0 records in 00:07:26.256 1+0 records out 00:07:26.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469148 s, 8.7 MB/s 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:26.256 01:17:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.256 01:17:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:26.514 /dev/nbd1 00:07:26.514 01:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:26.514 01:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:26.514 01:17:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:26.514 01:17:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:26.514 01:17:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.515 1+0 records in 00:07:26.515 1+0 records out 00:07:26.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392796 s, 10.4 MB/s 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:26.515 01:17:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:26.515 01:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.515 01:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.515 01:17:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.515 01:17:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.515 01:17:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.773 01:17:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.773 { 00:07:26.773 "nbd_device": "/dev/nbd0", 00:07:26.773 "bdev_name": "Malloc0" 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "nbd_device": "/dev/nbd1", 00:07:26.773 "bdev_name": "Malloc1" 00:07:26.773 } 00:07:26.773 ]' 00:07:26.773 01:17:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.773 { 00:07:26.773 "nbd_device": "/dev/nbd0", 00:07:26.773 "bdev_name": "Malloc0" 00:07:26.773 }, 00:07:26.773 { 00:07:26.773 "nbd_device": "/dev/nbd1", 00:07:26.773 "bdev_name": "Malloc1" 00:07:26.773 } 00:07:26.773 ]' 00:07:26.773 01:17:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.773 /dev/nbd1' 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.773 /dev/nbd1' 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:26.773 256+0 records in 00:07:26.773 256+0 records out 00:07:26.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132367 s, 79.2 MB/s 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.773 256+0 records in 00:07:26.773 256+0 records out 00:07:26.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271229 s, 38.7 MB/s 00:07:26.773 01:17:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.773 01:17:12 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:27.031 256+0 records in 00:07:27.031 256+0 records out 00:07:27.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325593 s, 32.2 MB/s 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.032 01:17:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.291 01:17:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.549 01:17:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.549 01:17:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.808 01:17:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:28.066 [2024-07-21 01:17:13.322478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.325 [2024-07-21 01:17:13.395036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.325 [2024-07-21 01:17:13.395037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.325 [2024-07-21 01:17:13.469903] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:28.325 [2024-07-21 01:17:13.469983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:30.858 01:17:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:30.858 spdk_app_start Round 1 00:07:30.858 01:17:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:30.858 01:17:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75413 /var/tmp/spdk-nbd.sock 00:07:30.858 01:17:16 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75413 ']' 00:07:30.858 01:17:16 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.858 01:17:16 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:30.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.858 01:17:16 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:30.858 01:17:16 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:30.858 01:17:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.118 01:17:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.118 01:17:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:31.118 01:17:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.118 Malloc0 00:07:31.377 01:17:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.377 Malloc1 00:07:31.377 01:17:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.377 01:17:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:31.637 /dev/nbd0 00:07:31.637 01:17:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:31.637 01:17:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.637 1+0 records in 00:07:31.637 1+0 records out 
00:07:31.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256837 s, 15.9 MB/s 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:31.637 01:17:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:31.637 01:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.637 01:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.637 01:17:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:31.896 /dev/nbd1 00:07:31.896 01:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:31.896 01:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.896 1+0 records in 00:07:31.896 1+0 records out 00:07:31.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603614 s, 6.8 MB/s 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.896 01:17:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:31.897 01:17:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.897 01:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:31.897 01:17:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:31.897 01:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.897 01:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.897 01:17:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.897 01:17:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.897 01:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.156 { 00:07:32.156 "nbd_device": "/dev/nbd0", 00:07:32.156 "bdev_name": "Malloc0" 00:07:32.156 }, 00:07:32.156 { 00:07:32.156 "nbd_device": "/dev/nbd1", 00:07:32.156 "bdev_name": "Malloc1" 00:07:32.156 } 
00:07:32.156 ]' 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.156 { 00:07:32.156 "nbd_device": "/dev/nbd0", 00:07:32.156 "bdev_name": "Malloc0" 00:07:32.156 }, 00:07:32.156 { 00:07:32.156 "nbd_device": "/dev/nbd1", 00:07:32.156 "bdev_name": "Malloc1" 00:07:32.156 } 00:07:32.156 ]' 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.156 /dev/nbd1' 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.156 /dev/nbd1' 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.156 256+0 records in 00:07:32.156 256+0 records out 00:07:32.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112219 s, 93.4 MB/s 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:32.156 256+0 records in 00:07:32.156 256+0 records out 00:07:32.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271997 s, 38.6 MB/s 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.156 01:17:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:32.415 256+0 records in 00:07:32.415 256+0 records out 00:07:32.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293709 s, 35.7 MB/s 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:32.415 01:17:17 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.415 01:17:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.675 01:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:32.934 01:17:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:32.934 01:17:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.192 01:17:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.451 [2024-07-21 01:17:18.680155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.451 [2024-07-21 01:17:18.752206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.451 [2024-07-21 01:17:18.752228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.710 [2024-07-21 01:17:18.826596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.710 [2024-07-21 01:17:18.826674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.241 01:17:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:36.241 spdk_app_start Round 2 00:07:36.241 01:17:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:36.241 01:17:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75413 /var/tmp/spdk-nbd.sock 00:07:36.241 01:17:21 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75413 ']' 00:07:36.241 01:17:21 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.241 01:17:21 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:36.241 01:17:21 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
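(The nbd_dd_data_verify flow in the trace above boils down to a dd/cmp round trip: fill a scratch file with 256 blocks of 4 KiB from /dev/urandom, copy it onto each exported /dev/nbdX with O_DIRECT writes, then compare the first 1 MiB back byte-for-byte. A minimal standalone sketch, with the scratch path and device names chosen for illustration:)

    # write phase: random data -> scratch file -> both nbd devices
    tmp_file=$(mktemp)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: each device must match the scratch file byte-for-byte
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"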
00:07:36.241 01:17:21 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.241 01:17:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.500 01:17:21 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.500 01:17:21 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:36.500 01:17:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:36.500 Malloc0 00:07:36.500 01:17:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:36.772 Malloc1 00:07:36.772 01:17:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.772 01:17:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:37.060 /dev/nbd0 00:07:37.060 01:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:37.060 01:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:37.060 1+0 records in 00:07:37.060 1+0 records out 
00:07:37.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353117 s, 11.6 MB/s 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:37.060 01:17:22 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:37.060 01:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.060 01:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.060 01:17:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:37.319 /dev/nbd1 00:07:37.319 01:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:37.319 01:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:37.319 1+0 records in 00:07:37.319 1+0 records out 00:07:37.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388488 s, 10.5 MB/s 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:37.319 01:17:22 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:37.319 01:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.319 01:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.319 01:17:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:37.319 01:17:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.319 01:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:37.579 { 00:07:37.579 "nbd_device": "/dev/nbd0", 00:07:37.579 "bdev_name": "Malloc0" 00:07:37.579 }, 00:07:37.579 { 00:07:37.579 "nbd_device": "/dev/nbd1", 00:07:37.579 "bdev_name": "Malloc1" 00:07:37.579 } 
00:07:37.579 ]' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.579 { 00:07:37.579 "nbd_device": "/dev/nbd0", 00:07:37.579 "bdev_name": "Malloc0" 00:07:37.579 }, 00:07:37.579 { 00:07:37.579 "nbd_device": "/dev/nbd1", 00:07:37.579 "bdev_name": "Malloc1" 00:07:37.579 } 00:07:37.579 ]' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:37.579 /dev/nbd1' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:37.579 /dev/nbd1' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:37.579 256+0 records in 00:07:37.579 256+0 records out 00:07:37.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122376 s, 85.7 MB/s 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:37.579 256+0 records in 00:07:37.579 256+0 records out 00:07:37.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254713 s, 41.2 MB/s 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:37.579 256+0 records in 00:07:37.579 256+0 records out 00:07:37.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0373751 s, 28.1 MB/s 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:37.579 01:17:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.579 01:17:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.839 01:17:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.098 01:17:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.099 01:17:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.099 01:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:38.358 01:17:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:38.358 01:17:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:38.617 01:17:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:38.875 [2024-07-21 01:17:24.027432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.875 [2024-07-21 01:17:24.093897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.875 [2024-07-21 01:17:24.093902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.875 [2024-07-21 01:17:24.169129] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:38.875 [2024-07-21 01:17:24.169210] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:42.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.164 01:17:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75413 /var/tmp/spdk-nbd.sock 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75413 ']' 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
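(Once both devices are stopped, nbd_get_count expects the nbd_get_disks RPC to report an empty list; the jq/grep pipeline in the trace can be reproduced roughly as below. The '|| true' mirrors the trailing 'true' in the trace, since grep -c exits non-zero when it counts zero matches.)

    # count attached nbd devices as reported over the RPC socket
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]      # after nbd_stop_disks, no /dev/nbd* should remain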
00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:42.164 01:17:26 event.app_repeat -- event/event.sh@39 -- # killprocess 75413 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 75413 ']' 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 75413 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75413 00:07:42.164 killing process with pid 75413 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75413' 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@965 -- # kill 75413 00:07:42.164 01:17:26 event.app_repeat -- common/autotest_common.sh@970 -- # wait 75413 00:07:42.164 spdk_app_start is called in Round 0. 00:07:42.164 Shutdown signal received, stop current app iteration 00:07:42.164 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:42.164 spdk_app_start is called in Round 1. 00:07:42.164 Shutdown signal received, stop current app iteration 00:07:42.164 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:42.164 spdk_app_start is called in Round 2. 00:07:42.164 Shutdown signal received, stop current app iteration 00:07:42.164 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:42.164 spdk_app_start is called in Round 3. 00:07:42.164 Shutdown signal received, stop current app iteration 00:07:42.164 01:17:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:42.164 01:17:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:42.164 00:07:42.164 real 0m17.289s 00:07:42.164 user 0m36.731s 00:07:42.164 sys 0m3.323s 00:07:42.164 01:17:27 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.164 01:17:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.164 ************************************ 00:07:42.164 END TEST app_repeat 00:07:42.164 ************************************ 00:07:42.164 01:17:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:42.164 01:17:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:42.164 01:17:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.164 01:17:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.164 01:17:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:42.164 ************************************ 00:07:42.164 START TEST cpu_locks 00:07:42.164 ************************************ 00:07:42.164 01:17:27 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:42.423 * Looking for test storage... 
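(app_repeat is torn down above through the killprocess helper: confirm the pid is still alive with kill -0, read its command name with ps, send SIGTERM, and wait for the process to be reaped. A condensed sketch of that sequence, with the sudo special-casing from the trace trimmed; the pid is the one from this run:)

    pid=75413
    kill -0 "$pid"                                  # fails early if the process is already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"                                     # default signal is SIGTERM
    wait "$pid"                                     # reap the child and collect its exit status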
00:07:42.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:42.423 01:17:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:42.423 01:17:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:42.423 01:17:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:42.423 01:17:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:42.423 01:17:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:42.423 01:17:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.423 01:17:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.423 ************************************ 00:07:42.423 START TEST default_locks 00:07:42.423 ************************************ 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75831 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75831 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75831 ']' 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:42.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:42.423 01:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.423 [2024-07-21 01:17:27.616084] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:42.423 [2024-07-21 01:17:27.616218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75831 ] 00:07:42.682 [2024-07-21 01:17:27.786218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.682 [2024-07-21 01:17:27.859255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.250 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:43.250 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:43.250 01:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75831 00:07:43.250 01:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75831 00:07:43.250 01:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.508 01:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75831 00:07:43.508 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 75831 ']' 00:07:43.508 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 75831 00:07:43.508 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:43.766 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.766 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75831 00:07:43.766 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:43.766 killing process with pid 75831 00:07:43.766 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:43.766 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75831' 00:07:43.766 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 75831 00:07:43.766 01:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 75831 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75831 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75831 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75831 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75831 ']' 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.333 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.333 ERROR: process (pid: 75831) is no longer running 00:07:44.333 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75831) - No such process 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:44.333 00:07:44.333 real 0m1.938s 00:07:44.333 user 0m1.759s 00:07:44.333 sys 0m0.755s 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.333 01:17:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.333 ************************************ 00:07:44.333 END TEST default_locks 00:07:44.333 ************************************ 00:07:44.333 01:17:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:44.333 01:17:29 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:44.333 01:17:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.333 01:17:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.333 ************************************ 00:07:44.333 START TEST default_locks_via_rpc 00:07:44.333 ************************************ 00:07:44.333 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:44.333 01:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75884 00:07:44.333 01:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75884 00:07:44.333 01:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.334 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75884 ']' 00:07:44.334 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.334 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:44.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
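(The default_locks test that just finished asserts that a target started with -m 0x1 is actually holding its CPU core lock, and that the same check flips once the process is killed. The check itself is a one-liner around lslocks, sketched here with the pid from this run:)

    # does the target process hold an spdk_cpu_lock file lock?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 75831       # expected to succeed while spdk_tgt (pid 75831) is running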
00:07:44.334 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.334 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:44.334 01:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.334 [2024-07-21 01:17:29.643650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:44.334 [2024-07-21 01:17:29.643799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75884 ] 00:07:44.593 [2024-07-21 01:17:29.817908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.593 [2024-07-21 01:17:29.887412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75884 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75884 00:07:45.161 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75884 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 75884 ']' 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 75884 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 
-- # ps --no-headers -o comm= 75884 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:45.729 killing process with pid 75884 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75884' 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 75884 00:07:45.729 01:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 75884 00:07:46.298 00:07:46.298 real 0m1.980s 00:07:46.298 user 0m1.785s 00:07:46.298 sys 0m0.781s 00:07:46.298 01:17:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.298 01:17:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 ************************************ 00:07:46.298 END TEST default_locks_via_rpc 00:07:46.298 ************************************ 00:07:46.298 01:17:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:46.298 01:17:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.298 01:17:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.298 01:17:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 ************************************ 00:07:46.298 START TEST non_locking_app_on_locked_coremask 00:07:46.298 ************************************ 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75936 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75936 /var/tmp/spdk.sock 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75936 ']' 00:07:46.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.298 01:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.556 [2024-07-21 01:17:31.693057] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
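(default_locks_via_rpc, which finished above, releases and retakes the core lock at runtime rather than at startup, using the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace. A rough manual equivalent is sketched below; the lslocks assertions are only an illustration of the expected state, not the exact helpers the test uses:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    ! lslocks -p 75884 | grep -q spdk_cpu_lock      # lock should be released now
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p 75884 | grep -q spdk_cpu_lock        # and held again after re-enabling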
00:07:46.556 [2024-07-21 01:17:31.693188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75936 ] 00:07:46.556 [2024-07-21 01:17:31.862723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.817 [2024-07-21 01:17:31.932344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.383 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75951 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75951 /var/tmp/spdk2.sock 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75951 ']' 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.384 01:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.384 [2024-07-21 01:17:32.563791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:47.384 [2024-07-21 01:17:32.563955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75951 ] 00:07:47.642 [2024-07-21 01:17:32.734351] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:47.642 [2024-07-21 01:17:32.734424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.642 [2024-07-21 01:17:32.877769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.208 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.208 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:48.208 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75936 00:07:48.208 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75936 00:07:48.208 01:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75936 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75936 ']' 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75936 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75936 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75936' 00:07:49.141 killing process with pid 75936 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75936 00:07:49.141 01:17:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75936 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75951 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75951 ']' 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75951 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75951 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:50.587 killing process with pid 75951 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75951' 00:07:50.587 01:17:35 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75951 00:07:50.587 01:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75951 00:07:51.154 00:07:51.154 real 0m4.575s 00:07:51.154 user 0m4.509s 00:07:51.154 sys 0m1.480s 00:07:51.154 01:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.154 01:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.154 ************************************ 00:07:51.154 END TEST non_locking_app_on_locked_coremask 00:07:51.154 ************************************ 00:07:51.154 01:17:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:51.154 01:17:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:51.154 01:17:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.154 01:17:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.154 ************************************ 00:07:51.154 START TEST locking_app_on_unlocked_coremask 00:07:51.154 ************************************ 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=76027 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 76027 /var/tmp/spdk.sock 00:07:51.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76027 ']' 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.154 01:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.154 [2024-07-21 01:17:36.340887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:51.154 [2024-07-21 01:17:36.341015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76027 ] 00:07:51.413 [2024-07-21 01:17:36.511776] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
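The locks_exist / grep spdk_cpu_lock pair that keeps reappearing in these tests is just a check for the per-core lock file; a sketch of the same check as a standalone helper, using pid 76027 from the run above (the target started with locks disabled):

    # Sketch of the lock check used throughout cpu_locks.sh: a claimed core
    # shows up as a lock on /var/tmp/spdk_cpu_lock_<core> held by the pid.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    if locks_exist 76027; then
        echo "core lock held"
    else
        echo "no core lock (expected when started with --disable-cpumask-locks)"
    fi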
00:07:51.413 [2024-07-21 01:17:36.511855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.413 [2024-07-21 01:17:36.580322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76037 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76037 /var/tmp/spdk2.sock 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76037 ']' 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.978 01:17:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.979 [2024-07-21 01:17:37.240931] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:51.979 [2024-07-21 01:17:37.241400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76037 ] 00:07:52.238 [2024-07-21 01:17:37.414828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.496 [2024-07-21 01:17:37.563074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.076 01:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.076 01:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:53.076 01:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76037 00:07:53.076 01:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76037 00:07:53.076 01:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 76027 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76027 ']' 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76027 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76027 00:07:54.012 killing process with pid 76027 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76027' 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 76027 00:07:54.012 01:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76027 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76037 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76037 ']' 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76037 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76037 00:07:55.400 killing process with pid 76037 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76037' 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 76037 00:07:55.400 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76037 00:07:55.659 00:07:55.659 real 0m4.645s 00:07:55.659 user 0m4.534s 00:07:55.659 sys 0m1.556s 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.659 ************************************ 00:07:55.659 END TEST locking_app_on_unlocked_coremask 00:07:55.659 ************************************ 00:07:55.659 01:17:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:55.659 01:17:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:55.659 01:17:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.659 01:17:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.659 ************************************ 00:07:55.659 START TEST locking_app_on_locked_coremask 00:07:55.659 ************************************ 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76112 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76112 /var/tmp/spdk.sock 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76112 ']' 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:55.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:55.659 01:17:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.917 [2024-07-21 01:17:41.064933] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:55.918 [2024-07-21 01:17:41.065084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76112 ] 00:07:56.177 [2024-07-21 01:17:41.237582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.177 [2024-07-21 01:17:41.299628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76128 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76128 /var/tmp/spdk2.sock 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76128 /var/tmp/spdk2.sock 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76128 /var/tmp/spdk2.sock 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76128 ']' 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:56.745 01:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.745 [2024-07-21 01:17:41.924131] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:56.745 [2024-07-21 01:17:41.924484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76128 ] 00:07:57.005 [2024-07-21 01:17:42.095464] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76112 has claimed it. 00:07:57.005 [2024-07-21 01:17:42.095530] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:57.264 ERROR: process (pid: 76128) is no longer running 00:07:57.264 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76128) - No such process 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76112 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76112 00:07:57.264 01:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76112 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76112 ']' 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76112 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76112 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:57.831 killing process with pid 76112 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76112' 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76112 00:07:57.831 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76112 00:07:58.397 00:07:58.397 real 0m2.686s 00:07:58.397 user 0m2.684s 00:07:58.397 sys 0m0.925s 00:07:58.397 ************************************ 00:07:58.397 END TEST locking_app_on_locked_coremask 00:07:58.397 ************************************ 00:07:58.397 01:17:43 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.397 01:17:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.656 01:17:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:58.656 01:17:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:58.656 01:17:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.656 01:17:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.656 ************************************ 00:07:58.656 START TEST locking_overlapped_coremask 00:07:58.656 ************************************ 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76181 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76181 /var/tmp/spdk.sock 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76181 ']' 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:58.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:58.656 01:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.656 [2024-07-21 01:17:43.833812] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:58.656 [2024-07-21 01:17:43.833957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76181 ] 00:07:58.915 [2024-07-21 01:17:44.008998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.915 [2024-07-21 01:17:44.080297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.915 [2024-07-21 01:17:44.080418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.915 [2024-07-21 01:17:44.080549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76199 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76199 /var/tmp/spdk2.sock 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76199 /var/tmp/spdk2.sock 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76199 /var/tmp/spdk2.sock 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76199 ']' 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.483 01:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.483 [2024-07-21 01:17:44.718424] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:59.483 [2024-07-21 01:17:44.718592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76199 ] 00:07:59.743 [2024-07-21 01:17:44.887709] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76181 has claimed it. 00:07:59.743 [2024-07-21 01:17:44.887793] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:00.002 ERROR: process (pid: 76199) is no longer running 00:08:00.002 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76199) - No such process 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76181 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 76181 ']' 00:08:00.002 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 76181 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76181 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76181' 00:08:00.260 killing process with pid 76181 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 76181 00:08:00.260 01:17:45 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 76181 00:08:00.828 00:08:00.828 real 0m2.242s 00:08:00.828 user 0m5.559s 00:08:00.828 sys 0m0.743s 00:08:00.828 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.828 01:17:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.828 ************************************ 00:08:00.828 END TEST locking_overlapped_coremask 00:08:00.828 ************************************ 00:08:00.828 01:17:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:00.828 01:17:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:00.828 01:17:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.828 01:17:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.828 ************************************ 00:08:00.828 START TEST locking_overlapped_coremask_via_rpc 00:08:00.828 ************************************ 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76241 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76241 /var/tmp/spdk.sock 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76241 ']' 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:00.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:00.828 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.087 [2024-07-21 01:17:46.156801] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:01.087 [2024-07-21 01:17:46.156975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76241 ] 00:08:01.087 [2024-07-21 01:17:46.324719] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.087 [2024-07-21 01:17:46.324776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.346 [2024-07-21 01:17:46.399590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.346 [2024-07-21 01:17:46.399635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.346 [2024-07-21 01:17:46.399734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76259 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76259 /var/tmp/spdk2.sock 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76259 ']' 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.917 01:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.917 [2024-07-21 01:17:47.026600] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:01.917 [2024-07-21 01:17:47.027047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76259 ] 00:08:01.917 [2024-07-21 01:17:47.196497] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.917 [2024-07-21 01:17:47.196584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.196 [2024-07-21 01:17:47.355554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.196 [2024-07-21 01:17:47.355639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.196 [2024-07-21 01:17:47.355757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.764 01:17:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.764 [2024-07-21 01:17:48.008046] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76241 has claimed it. 
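That failed claim is what comes back to the client as the JSON-RPC error dumped just below. A sketch of driving the same calls by hand, assuming the scripts/rpc.py wrapper in the SPDK tree exposes the method (the test's rpc_cmd helper does the equivalent):

    # Sketch, not part of the run: asking running targets to claim their cores.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Primary target (cores 0-2, started with --disable-cpumask-locks):
    # this claim succeeds and creates /var/tmp/spdk_cpu_lock_000..002.
    "$RPC" -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # The second target overlaps core 2 with the primary, so its claim fails
    # with -32603 "Failed to claim CPU core: 2", as shown in the dump below.
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks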
00:08:02.764 request: 00:08:02.764 { 00:08:02.764 "method": "framework_enable_cpumask_locks", 00:08:02.764 "req_id": 1 00:08:02.764 } 00:08:02.764 Got JSON-RPC error response 00:08:02.764 response: 00:08:02.764 { 00:08:02.764 "code": -32603, 00:08:02.764 "message": "Failed to claim CPU core: 2" 00:08:02.764 } 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76241 /var/tmp/spdk.sock 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76241 ']' 00:08:02.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.764 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76259 /var/tmp/spdk2.sock 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76259 ']' 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:03.024 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:03.283 00:08:03.283 real 0m2.371s 00:08:03.283 user 0m0.965s 00:08:03.283 sys 0m0.194s 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.283 01:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.283 ************************************ 00:08:03.283 END TEST locking_overlapped_coremask_via_rpc 00:08:03.283 ************************************ 00:08:03.283 01:17:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:03.283 01:17:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76241 ]] 00:08:03.283 01:17:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76241 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76241 ']' 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76241 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76241 00:08:03.283 killing process with pid 76241 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76241' 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76241 00:08:03.283 01:17:48 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76241 00:08:03.852 01:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76259 ]] 00:08:03.852 01:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76259 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76259 ']' 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76259 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:03.852 
01:17:49 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76259 00:08:03.852 killing process with pid 76259 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76259' 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76259 00:08:03.852 01:17:49 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76259 00:08:04.788 01:17:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:04.788 Process with pid 76241 is not found 00:08:04.788 Process with pid 76259 is not found 00:08:04.788 01:17:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:04.788 01:17:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76241 ]] 00:08:04.788 01:17:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76241 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76241 ']' 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76241 00:08:04.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76241) - No such process 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76241 is not found' 00:08:04.788 01:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76259 ]] 00:08:04.788 01:17:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76259 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76259 ']' 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76259 00:08:04.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76259) - No such process 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76259 is not found' 00:08:04.788 01:17:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:04.788 00:08:04.788 real 0m22.408s 00:08:04.788 user 0m34.458s 00:08:04.788 sys 0m7.830s 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.788 01:17:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.788 ************************************ 00:08:04.788 END TEST cpu_locks 00:08:04.788 ************************************ 00:08:04.788 00:08:04.788 real 0m50.721s 00:08:04.788 user 1m31.625s 00:08:04.788 sys 0m12.484s 00:08:04.788 01:17:49 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.788 01:17:49 event -- common/autotest_common.sh@10 -- # set +x 00:08:04.788 ************************************ 00:08:04.788 END TEST event 00:08:04.788 ************************************ 00:08:04.788 01:17:49 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:04.788 01:17:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:04.788 01:17:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.788 01:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:04.788 ************************************ 00:08:04.788 START TEST thread 00:08:04.788 ************************************ 00:08:04.788 01:17:49 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:04.788 * Looking for test storage... 
00:08:04.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:04.788 01:17:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.788 01:17:50 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:04.788 01:17:50 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.788 01:17:50 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.788 ************************************ 00:08:04.788 START TEST thread_poller_perf 00:08:04.788 ************************************ 00:08:04.788 01:17:50 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.788 [2024-07-21 01:17:50.092497] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:04.788 [2024-07-21 01:17:50.092884] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76389 ] 00:08:05.067 [2024-07-21 01:17:50.254925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.067 [2024-07-21 01:17:50.324324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.067 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:06.442 ====================================== 00:08:06.442 busy:2501186318 (cyc) 00:08:06.442 total_run_count: 405000 00:08:06.442 tsc_hz: 2490000000 (cyc) 00:08:06.442 ====================================== 00:08:06.442 poller_cost: 6175 (cyc), 2479 (nsec) 00:08:06.442 00:08:06.442 real 0m1.414s 00:08:06.442 ************************************ 00:08:06.442 END TEST thread_poller_perf 00:08:06.442 ************************************ 00:08:06.442 user 0m1.180s 00:08:06.442 sys 0m0.127s 00:08:06.442 01:17:51 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.442 01:17:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:06.442 01:17:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:06.442 01:17:51 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:06.442 01:17:51 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.442 01:17:51 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.442 ************************************ 00:08:06.442 START TEST thread_poller_perf 00:08:06.442 ************************************ 00:08:06.442 01:17:51 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:06.442 [2024-07-21 01:17:51.576019] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:06.442 [2024-07-21 01:17:51.576172] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76420 ] 00:08:06.442 [2024-07-21 01:17:51.748579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.701 Running 1000 pollers for 1 seconds with 0 microseconds period. 
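The poller_cost figures in these summaries follow directly from the other two numbers; a rough check against the first run above (the second run's summary follows below):

    # poller_cost is busy cycles divided by the number of polls, then
    # converted to ns via tsc_hz. Figures are the first run's, from above.
    busy=2501186318; runs=405000
    echo "cycles per poll: $(( busy / runs ))"                      # -> 6175
    awk -v cyc=6175 -v hz=2490000000 \
        'BEGIN { printf "ns per poll: %d\n", cyc / hz * 1e9 }'      # -> 2479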
00:08:06.701 [2024-07-21 01:17:51.818010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.635 ====================================== 00:08:07.635 busy:2494081978 (cyc) 00:08:07.635 total_run_count: 5329000 00:08:07.635 tsc_hz: 2490000000 (cyc) 00:08:07.635 ====================================== 00:08:07.635 poller_cost: 468 (cyc), 187 (nsec) 00:08:07.893 00:08:07.893 real 0m1.419s 00:08:07.893 user 0m1.190s 00:08:07.893 sys 0m0.122s 00:08:07.893 ************************************ 00:08:07.893 END TEST thread_poller_perf 00:08:07.893 ************************************ 00:08:07.893 01:17:52 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.893 01:17:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:07.893 01:17:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:07.893 00:08:07.893 real 0m3.123s 00:08:07.893 user 0m2.476s 00:08:07.893 sys 0m0.424s 00:08:07.893 01:17:53 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.893 01:17:53 thread -- common/autotest_common.sh@10 -- # set +x 00:08:07.893 ************************************ 00:08:07.893 END TEST thread 00:08:07.893 ************************************ 00:08:07.893 01:17:53 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:07.893 01:17:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:07.893 01:17:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.893 01:17:53 -- common/autotest_common.sh@10 -- # set +x 00:08:07.893 ************************************ 00:08:07.893 START TEST accel 00:08:07.893 ************************************ 00:08:07.893 01:17:53 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:07.893 * Looking for test storage... 00:08:08.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:08.150 01:17:53 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:08.150 01:17:53 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:08.150 01:17:53 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:08.150 01:17:53 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76501 00:08:08.150 01:17:53 accel -- accel/accel.sh@63 -- # waitforlisten 76501 00:08:08.150 01:17:53 accel -- common/autotest_common.sh@827 -- # '[' -z 76501 ']' 00:08:08.150 01:17:53 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.150 01:17:53 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:08.150 01:17:53 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.150 01:17:53 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:08.150 01:17:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.150 01:17:53 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:08.150 01:17:53 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:08.150 01:17:53 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.150 01:17:53 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.150 01:17:53 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.150 01:17:53 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.150 01:17:53 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.150 01:17:53 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:08.150 01:17:53 accel -- accel/accel.sh@41 -- # jq -r . 00:08:08.150 [2024-07-21 01:17:53.319083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:08.150 [2024-07-21 01:17:53.319419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76501 ] 00:08:08.408 [2024-07-21 01:17:53.492005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.408 [2024-07-21 01:17:53.562674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.975 01:17:54 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:08.975 01:17:54 accel -- common/autotest_common.sh@860 -- # return 0 00:08:08.975 01:17:54 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:08.975 01:17:54 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:08.975 01:17:54 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:08.975 01:17:54 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:08.975 01:17:54 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:08.975 01:17:54 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:08.975 01:17:54 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:08.975 01:17:54 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.975 01:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.975 01:17:54 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 
01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.975 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.975 01:17:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # IFS== 00:08:08.975 01:17:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:08.976 01:17:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:08.976 01:17:54 accel -- accel/accel.sh@75 -- # killprocess 76501 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@946 -- # '[' -z 76501 ']' 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@950 -- # kill -0 76501 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@951 -- # uname 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76501 00:08:08.976 killing process with pid 76501 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76501' 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@965 -- # kill 76501 00:08:08.976 01:17:54 accel -- common/autotest_common.sh@970 -- # wait 76501 00:08:09.912 01:17:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:09.912 01:17:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:09.912 01:17:54 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:09.912 01:17:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.912 01:17:54 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.912 01:17:54 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:09.912 01:17:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
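The opcode-assignment loop traced above flattens the accel_get_opc_assignments RPC response into "opc=module" lines with jq and loads them into an associative array, one entry per opcode; with no hardware modules loaded, every opcode ends up mapped to software. A minimal sketch of that parsing, with a stand-in JSON payload instead of a live RPC call:

# Stand-in for the accel_get_opc_assignments RPC output (illustrative values only).
assignments_json='{"copy":"software","fill":"software","crc32c":"software"}'

declare -A expected_opcs
# Same jq filter as in the trace: JSON object -> "key=value" lines, one per opcode.
while IFS== read -r opc module; do
    expected_opcs["$opc"]=$module
done < <(jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' <<<"$assignments_json")

declare -p expected_opcs   # e.g. expected_opcs=([crc32c]="software" [fill]="software" [copy]="software" )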
00:08:09.912 01:17:54 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.912 01:17:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:09.912 01:17:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:09.912 01:17:55 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:09.912 01:17:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.912 01:17:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.912 ************************************ 00:08:09.912 START TEST accel_missing_filename 00:08:09.912 ************************************ 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.912 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:09.912 01:17:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:09.912 [2024-07-21 01:17:55.104106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:09.912 [2024-07-21 01:17:55.104230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76560 ] 00:08:10.171 [2024-07-21 01:17:55.263247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.171 [2024-07-21 01:17:55.332526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.171 [2024-07-21 01:17:55.409885] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.431 [2024-07-21 01:17:55.527716] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:10.431 A filename is required. 
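"A filename is required." is accel_perf rejecting a compress workload started without an input file; the accel_missing_filename test only asserts that this invocation exits non-zero. Roughly, outside the harness (which also feeds a JSON config through -c /dev/fd/62):

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

# What accel_missing_filename asserts: compress with no input file must fail.
"$ACCEL_PERF" -t 1 -w compress; echo "exit status: $?"

# Per the usage text printed further down, the missing piece is
# "-l <uncompressed input file>"; the neighbouring accel_compress_verify test
# points -l at test/accel/bib.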
00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.431 00:08:10.431 real 0m0.623s 00:08:10.431 user 0m0.350s 00:08:10.431 sys 0m0.213s 00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.431 01:17:55 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:10.431 ************************************ 00:08:10.431 END TEST accel_missing_filename 00:08:10.431 ************************************ 00:08:10.431 01:17:55 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:10.431 01:17:55 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:10.431 01:17:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.431 01:17:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.689 ************************************ 00:08:10.689 START TEST accel_compress_verify 00:08:10.689 ************************************ 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.689 01:17:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.689 01:17:55 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:10.689 01:17:55 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:08:10.689 [2024-07-21 01:17:55.801910] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:10.689 [2024-07-21 01:17:55.802073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76582 ] 00:08:10.689 [2024-07-21 01:17:55.974761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.947 [2024-07-21 01:17:56.044603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.947 [2024-07-21 01:17:56.122114] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.947 [2024-07-21 01:17:56.238237] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:11.206 00:08:11.206 Compression does not support the verify option, aborting. 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.206 00:08:11.206 real 0m0.638s 00:08:11.206 user 0m0.354s 00:08:11.206 sys 0m0.224s 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.206 ************************************ 00:08:11.206 END TEST accel_compress_verify 00:08:11.206 ************************************ 00:08:11.206 01:17:56 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:11.206 01:17:56 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:11.206 01:17:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:11.206 01:17:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.206 01:17:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.206 ************************************ 00:08:11.206 START TEST accel_wrong_workload 00:08:11.206 ************************************ 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.206 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:08:11.206 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:11.206 01:17:56 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:11.206 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.206 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.206 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.206 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.206 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.206 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:11.207 01:17:56 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:11.207 Unsupported workload type: foobar 00:08:11.207 [2024-07-21 01:17:56.507405] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:11.465 accel_perf options: 00:08:11.465 [-h help message] 00:08:11.465 [-q queue depth per core] 00:08:11.465 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:11.465 [-T number of threads per core 00:08:11.465 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:11.465 [-t time in seconds] 00:08:11.465 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:11.465 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:11.465 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:11.465 [-l for compress/decompress workloads, name of uncompressed input file 00:08:11.465 [-S for crc32c workload, use this seed value (default 0) 00:08:11.465 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:11.465 [-f for fill workload, use this BYTE value (default 255) 00:08:11.465 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:11.465 [-y verify result if this switch is on] 00:08:11.465 [-a tasks to allocate per core (default: same value as -q)] 00:08:11.465 Can be used to spread operations across a wider range of memory. 
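The usage text above doubles as the reference for the remaining option checks: -w must name one of the listed workloads, and the other flags tune queue depth, transfer size, seed and verification. For contrast with the rejected "foobar" run, a valid standalone invocation built only from the documented flags (values chosen arbitrarily):

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

# crc32c for 1 second: queue depth 32, 4 KiB transfers, seed 0, verify results.
"$ACCEL_PERF" -t 1 -w crc32c -q 32 -o 4096 -S 0 -y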
00:08:11.465 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:11.465 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.466 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.466 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.466 00:08:11.466 real 0m0.087s 00:08:11.466 user 0m0.082s 00:08:11.466 sys 0m0.045s 00:08:11.466 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.466 01:17:56 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:11.466 ************************************ 00:08:11.466 END TEST accel_wrong_workload 00:08:11.466 ************************************ 00:08:11.466 01:17:56 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:11.466 01:17:56 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:08:11.466 01:17:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.466 01:17:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.466 ************************************ 00:08:11.466 START TEST accel_negative_buffers 00:08:11.466 ************************************ 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:11.466 01:17:56 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:11.466 -x option must be non-negative. 
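These rejection tests lean on the harness's NOT wrapper: it runs the command, records the exit status in es, folds statuses above 128 (signal deaths) down by 128, and succeeds only when es is non-zero, which is why es=234 becomes 106 and es=161 becomes 33 in the traces above. A rough sketch of that shape, not the actual autotest_common.sh implementation:

# Succeeds only when the wrapped command fails (simplified; the real helper
# also validates its arguments and maps a few specific statuses).
NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && es=$((es - 128))   # normalize "killed by signal" statuses
    ((es != 0))
}

NOT false && echo "wrapped command failed, NOT reports success"
NOT true  || echo "wrapped command succeeded, NOT reports failure"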
00:08:11.466 [2024-07-21 01:17:56.661378] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:11.466 accel_perf options: 00:08:11.466 [-h help message] 00:08:11.466 [-q queue depth per core] 00:08:11.466 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:11.466 [-T number of threads per core 00:08:11.466 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:11.466 [-t time in seconds] 00:08:11.466 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:11.466 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:11.466 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:11.466 [-l for compress/decompress workloads, name of uncompressed input file 00:08:11.466 [-S for crc32c workload, use this seed value (default 0) 00:08:11.466 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:11.466 [-f for fill workload, use this BYTE value (default 255) 00:08:11.466 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:11.466 [-y verify result if this switch is on] 00:08:11.466 [-a tasks to allocate per core (default: same value as -q)] 00:08:11.466 Can be used to spread operations across a wider range of memory. 00:08:11.466 ************************************ 00:08:11.466 END TEST accel_negative_buffers 00:08:11.466 ************************************ 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.466 00:08:11.466 real 0m0.082s 00:08:11.466 user 0m0.070s 00:08:11.466 sys 0m0.049s 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.466 01:17:56 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:11.466 01:17:56 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:11.466 01:17:56 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:11.466 01:17:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.466 01:17:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.466 ************************************ 00:08:11.466 START TEST accel_crc32c 00:08:11.466 ************************************ 00:08:11.466 01:17:56 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:11.466 01:17:56 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:11.466 01:17:56 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:11.724 [2024-07-21 01:17:56.814522] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:11.724 [2024-07-21 01:17:56.814652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76647 ] 00:08:11.724 [2024-07-21 01:17:56.978811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.983 [2024-07-21 01:17:57.050653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:11.983 01:17:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:13.368 01:17:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.368 00:08:13.368 real 0m1.619s 00:08:13.368 user 0m1.321s 00:08:13.368 sys 0m0.214s 00:08:13.368 01:17:58 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.368 ************************************ 00:08:13.368 END TEST accel_crc32c 00:08:13.368 ************************************ 00:08:13.368 01:17:58 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:13.368 01:17:58 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:13.368 01:17:58 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:13.368 01:17:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.368 01:17:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.368 ************************************ 00:08:13.368 START TEST accel_crc32c_C2 00:08:13.368 ************************************ 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:13.368 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:13.368 [2024-07-21 01:17:58.501485] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:13.368 [2024-07-21 01:17:58.501613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76688 ] 00:08:13.368 [2024-07-21 01:17:58.673189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.626 [2024-07-21 01:17:58.747219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
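The long runs of IFS=:, read -r var val and case "$var" in traces are accel.sh walking accel_perf's colon-separated start-up printout: each line is split into a key (var) and a value (val), and the case arms capture the opcode and the module that serviced it, which the [[ -n software ]] / [[ -n crc32c ]] checks at the end of each test then verify. A simplified sketch of a loop in that shape; the key names in the sample input are made up, only the values mirror the trace:

accel_opc='' accel_module=''
while IFS=: read -r var val; do
    case "$var" in
        *opcode*) accel_opc=${val# } ;;   # strip the leading space after the colon
        *module*) accel_module=${val# } ;;
    esac
done <<'BANNER'
workload opcode: crc32c
module: software
transfer size: 4096 bytes
run time: 1 seconds
BANNER

[[ -n $accel_opc && -n $accel_module ]] && echo "$accel_opc handled by $accel_module"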
00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:13.626 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.627 01:17:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.065 00:08:15.065 real 0m1.629s 00:08:15.065 user 0m1.337s 00:08:15.065 sys 0m0.207s 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.065 ************************************ 00:08:15.065 END TEST accel_crc32c_C2 00:08:15.065 ************************************ 00:08:15.065 01:18:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:15.065 01:18:00 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:15.065 01:18:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:15.065 01:18:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.065 01:18:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.065 ************************************ 00:08:15.065 START TEST accel_copy 00:08:15.065 ************************************ 00:08:15.065 01:18:00 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:15.065 01:18:00 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:15.065 [2024-07-21 01:18:00.205442] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:15.065 [2024-07-21 01:18:00.205563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76724 ] 00:08:15.323 [2024-07-21 01:18:00.376307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.323 [2024-07-21 01:18:00.449211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.323 01:18:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.693 01:18:01 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.693 01:18:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:16.694 01:18:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.694 00:08:16.694 real 0m1.634s 00:08:16.694 user 0m1.330s 00:08:16.694 sys 0m0.217s 00:08:16.694 01:18:01 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.694 ************************************ 00:08:16.694 END TEST accel_copy 00:08:16.694 ************************************ 00:08:16.694 01:18:01 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.694 01:18:01 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.694 01:18:01 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:16.694 01:18:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.694 01:18:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.694 ************************************ 00:08:16.694 START TEST accel_fill 00:08:16.694 ************************************ 00:08:16.694 01:18:01 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:16.694 01:18:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
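The fill case exercises more of the tuning flags from the usage text: -f is the byte value written into every buffer, -q the queue depth per core, -a the number of tasks allocated per core, and -y turns result verification on. Outside the harness (which additionally passes its JSON config on -c /dev/fd/62), the traced invocation amounts to:

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

# Fill every buffer with byte value 128, queue depth 64, 64 tasks per core,
# run for 1 second and verify the result.
"$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y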
00:08:16.694 [2024-07-21 01:18:01.914896] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:16.694 [2024-07-21 01:18:01.915027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76759 ] 00:08:16.952 [2024-07-21 01:18:02.087280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.952 [2024-07-21 01:18:02.156121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:16.952 01:18:02 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.952 01:18:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:18.324 01:18:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.324 00:08:18.324 real 0m1.636s 00:08:18.324 user 0m1.325s 00:08:18.324 sys 0m0.219s 00:08:18.324 ************************************ 00:08:18.324 END TEST accel_fill 00:08:18.324 ************************************ 00:08:18.324 01:18:03 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.324 01:18:03 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:18.324 01:18:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:18.324 01:18:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:18.324 01:18:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.324 01:18:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.324 ************************************ 00:08:18.324 START TEST accel_copy_crc32c 00:08:18.324 ************************************ 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:18.324 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:18.324 [2024-07-21 01:18:03.628038] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:18.324 [2024-07-21 01:18:03.628174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76800 ] 00:08:18.582 [2024-07-21 01:18:03.800669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.582 [2024-07-21 01:18:03.874761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.840 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.841 01:18:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:20.212 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.213 00:08:20.213 real 0m1.631s 00:08:20.213 user 0m1.310s 00:08:20.213 sys 0m0.233s 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.213 ************************************ 00:08:20.213 END TEST accel_copy_crc32c 00:08:20.213 ************************************ 00:08:20.213 01:18:05 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:20.213 01:18:05 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:20.213 01:18:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:20.213 01:18:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.213 01:18:05 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.213 ************************************ 00:08:20.213 START TEST accel_copy_crc32c_C2 00:08:20.213 ************************************ 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:20.213 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:20.213 [2024-07-21 01:18:05.324160] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:20.213 [2024-07-21 01:18:05.324541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76836 ] 00:08:20.213 [2024-07-21 01:18:05.494642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.471 [2024-07-21 01:18:05.568710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.471 01:18:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:21.842 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:21.843 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.843 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:21.843 01:18:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.843 00:08:21.843 real 0m1.623s 00:08:21.843 user 0m1.312s 00:08:21.843 sys 0m0.217s 00:08:21.843 01:18:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.843 ************************************ 00:08:21.843 END TEST accel_copy_crc32c_C2 00:08:21.843 ************************************ 00:08:21.843 01:18:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:21.843 01:18:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:08:21.843 01:18:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:21.843 01:18:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.843 01:18:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.843 ************************************ 00:08:21.843 START TEST accel_dualcast 00:08:21.843 ************************************ 00:08:21.843 01:18:06 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:21.843 01:18:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:21.843 [2024-07-21 01:18:07.014896] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:21.843 [2024-07-21 01:18:07.015014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76871 ] 00:08:22.101 [2024-07-21 01:18:07.183997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.101 [2024-07-21 01:18:07.253519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.101 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:22.102 01:18:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 
01:18:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:23.477 ************************************ 00:08:23.477 END TEST accel_dualcast 00:08:23.477 ************************************ 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:23.477 01:18:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.477 00:08:23.477 real 0m1.613s 00:08:23.477 user 0m1.310s 00:08:23.477 sys 0m0.215s 00:08:23.477 01:18:08 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.477 01:18:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 01:18:08 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:23.477 01:18:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:23.477 01:18:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.477 01:18:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 ************************************ 00:08:23.477 START TEST accel_compare 00:08:23.477 ************************************ 00:08:23.477 01:18:08 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:23.477 01:18:08 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:23.477 [2024-07-21 01:18:08.693869] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:23.477 [2024-07-21 01:18:08.694010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76907 ] 00:08:23.736 [2024-07-21 01:18:08.865006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.736 [2024-07-21 01:18:08.936028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.736 01:18:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.737 01:18:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.737 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.737 01:18:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.111 01:18:10 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.111 ************************************ 00:08:25.111 END TEST accel_compare 00:08:25.111 ************************************ 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:25.111 01:18:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.111 00:08:25.111 real 0m1.625s 00:08:25.111 user 0m1.317s 00:08:25.111 sys 0m0.223s 00:08:25.111 01:18:10 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.111 01:18:10 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:25.111 01:18:10 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:25.111 01:18:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:25.111 01:18:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.111 01:18:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.111 ************************************ 00:08:25.111 START TEST accel_xor 00:08:25.111 ************************************ 00:08:25.111 01:18:10 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:08:25.111 01:18:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:25.111 01:18:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:25.111 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:25.112 01:18:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:25.112 [2024-07-21 01:18:10.395128] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:25.112 [2024-07-21 01:18:10.395469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76948 ] 00:08:25.369 [2024-07-21 01:18:10.568234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.369 [2024-07-21 01:18:10.637642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:25.627 01:18:10 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:25.627 01:18:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:27.038 01:18:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.038 00:08:27.038 real 0m1.633s 00:08:27.038 user 0m1.326s 00:08:27.038 sys 0m0.220s 00:08:27.038 01:18:11 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.038 01:18:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:27.038 ************************************ 00:08:27.038 END TEST accel_xor 00:08:27.038 ************************************ 00:08:27.038 01:18:12 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:27.038 01:18:12 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:27.038 01:18:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.038 01:18:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.038 ************************************ 00:08:27.038 START TEST accel_xor 00:08:27.038 ************************************ 00:08:27.038 01:18:12 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:27.038 01:18:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:27.038 [2024-07-21 01:18:12.097689] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:27.038 [2024-07-21 01:18:12.098053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76983 ] 00:08:27.038 [2024-07-21 01:18:12.270018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.038 [2024-07-21 01:18:12.342263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.310 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:27.311 01:18:12 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:27.311 01:18:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.691 01:18:13 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:28.692 ************************************ 00:08:28.692 END TEST accel_xor 00:08:28.692 ************************************ 00:08:28.692 01:18:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.692 00:08:28.692 real 0m1.639s 00:08:28.692 user 0m1.326s 00:08:28.692 sys 0m0.224s 00:08:28.692 01:18:13 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.692 01:18:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:28.692 01:18:13 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:28.692 01:18:13 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:28.692 01:18:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.692 01:18:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.692 ************************************ 00:08:28.692 START TEST accel_dif_verify 00:08:28.692 ************************************ 00:08:28.692 01:18:13 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:28.692 01:18:13 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:28.692 [2024-07-21 01:18:13.815990] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:28.692 [2024-07-21 01:18:13.816133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77019 ] 00:08:28.692 [2024-07-21 01:18:13.989321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.948 [2024-07-21 01:18:14.060381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.948 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:28.949 01:18:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:30.320 01:18:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.320 00:08:30.320 real 0m1.629s 00:08:30.320 user 0m1.317s 00:08:30.320 sys 0m0.227s 00:08:30.320 01:18:15 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:30.320 01:18:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 ************************************ 00:08:30.320 END TEST accel_dif_verify 00:08:30.320 ************************************ 00:08:30.320 01:18:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:30.320 01:18:15 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:30.320 01:18:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:30.320 01:18:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.320 ************************************ 00:08:30.320 START TEST accel_dif_generate 00:08:30.320 ************************************ 00:08:30.320 01:18:15 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:30.320 01:18:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:30.320 [2024-07-21 01:18:15.514406] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:30.321 [2024-07-21 01:18:15.514685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77060 ] 00:08:30.578 [2024-07-21 01:18:15.680783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.578 [2024-07-21 01:18:15.746279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 
01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.578 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:30.579 01:18:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:31.961 01:18:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.961 00:08:31.961 real 0m1.602s 00:08:31.961 user 0m1.308s 00:08:31.961 sys 0m0.208s 00:08:31.961 01:18:17 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.961 
01:18:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:31.961 ************************************ 00:08:31.961 END TEST accel_dif_generate 00:08:31.961 ************************************ 00:08:31.961 01:18:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:31.961 01:18:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:31.961 01:18:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.961 01:18:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.961 ************************************ 00:08:31.961 START TEST accel_dif_generate_copy 00:08:31.961 ************************************ 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:31.961 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:31.961 [2024-07-21 01:18:17.196677] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:31.961 [2024-07-21 01:18:17.196801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77090 ] 00:08:32.220 [2024-07-21 01:18:17.373973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.220 [2024-07-21 01:18:17.451528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.479 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:32.480 01:18:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.855 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.856 00:08:33.856 real 0m1.628s 00:08:33.856 user 0m1.315s 00:08:33.856 sys 0m0.227s 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.856 01:18:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:33.856 ************************************ 00:08:33.856 END TEST accel_dif_generate_copy 00:08:33.856 ************************************ 00:08:33.856 01:18:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:33.856 01:18:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:33.856 01:18:18 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:33.856 01:18:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.856 01:18:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.856 ************************************ 00:08:33.856 START TEST accel_comp 00:08:33.856 ************************************ 00:08:33.856 01:18:18 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:33.856 01:18:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:33.856 [2024-07-21 01:18:18.895057] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:33.856 [2024-07-21 01:18:18.895184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77131 ] 00:08:33.856 [2024-07-21 01:18:19.065970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.856 [2024-07-21 01:18:19.128461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:34.114 01:18:19 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:34.114 01:18:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:35.490 01:18:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.490 00:08:35.490 real 0m1.612s 00:08:35.490 user 0m1.313s 00:08:35.490 sys 0m0.213s 00:08:35.490 01:18:20 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.490 ************************************ 00:08:35.490 END TEST accel_comp 00:08:35.490 01:18:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:35.490 ************************************ 00:08:35.490 01:18:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.490 01:18:20 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:35.490 01:18:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.490 01:18:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.490 ************************************ 00:08:35.490 START TEST accel_decomp 00:08:35.490 ************************************ 00:08:35.490 01:18:20 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:35.490 
01:18:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:35.490 01:18:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:35.490 [2024-07-21 01:18:20.576932] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:35.490 [2024-07-21 01:18:20.577056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77171 ] 00:08:35.490 [2024-07-21 01:18:20.748778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.748 [2024-07-21 01:18:20.812056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.748 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:35.749 01:18:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:37.121 01:18:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.121 00:08:37.121 real 0m1.618s 00:08:37.121 user 0m1.322s 00:08:37.121 sys 0m0.212s 00:08:37.121 01:18:22 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.121 ************************************ 00:08:37.121 END TEST accel_decomp 00:08:37.121 ************************************ 00:08:37.121 01:18:22 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:37.121 01:18:22 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:37.121 01:18:22 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:37.121 01:18:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.121 01:18:22 accel -- common/autotest_common.sh@10 -- # set +x 
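The long runs of "val=..." / case "$var" lines in the accel_comp and accel_decomp traces above are the harness echoing its parse of accel_perf's startup summary: accel.sh@19 splits each "Label: value" output line on the colon, and accel.sh@21-23 pick out the opcode and module so the [[ -n software ]] / [[ -n compress ]] checks at accel.sh@27 can pass. A minimal reconstruction of that loop is sketched below; the two case patterns and the whitespace handling are guesses rather than text copied from accel.sh, and the bib path is the one recorded in this log.

    bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
    while IFS=: read -r var val; do                 # accel.sh@19 in the trace above
        case "$var" in                              # accel.sh@21
            *[Ww]orkload*) accel_opc=$val ;;        # yields compress / decompress (accel.sh@23)
            *[Mm]odule*)   accel_module=$val ;;     # yields software (accel.sh@22)
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l "$bib" -y)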
00:08:37.121 ************************************ 00:08:37.121 START TEST accel_decmop_full 00:08:37.121 ************************************ 00:08:37.121 01:18:22 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:37.121 01:18:22 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:37.121 [2024-07-21 01:18:22.274903] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:37.121 [2024-07-21 01:18:22.275200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77202 ] 00:08:37.380 [2024-07-21 01:18:22.443899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.380 [2024-07-21 01:18:22.505159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:37.380 01:18:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.753 01:18:23 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:38.753 01:18:23 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.753 00:08:38.753 real 0m1.623s 00:08:38.753 user 0m1.331s 00:08:38.753 sys 0m0.206s 00:08:38.753 01:18:23 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:38.753 01:18:23 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:38.753 ************************************ 00:08:38.753 END TEST accel_decmop_full 00:08:38.753 ************************************ 00:08:38.753 01:18:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:38.753 01:18:23 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:38.753 01:18:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:38.753 01:18:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.753 ************************************ 00:08:38.753 START TEST accel_decomp_mcore 00:08:38.754 ************************************ 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
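accel_decomp_mcore differs from the single-core cases above only in the -m 0xf core mask, which is why four "Reactor started on core ..." notices appear below instead of one. Replaying the accel.sh@12 command by hand, minus the JSON accel config the harness feeds in over /dev/fd/62, would look roughly like this sketch (paths taken from the log, flags unchanged):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w decompress \
        -l test/accel/bib -y -m 0xf    # 0xf = cores 0-3; -t 1 matches the '1 seconds' run time echoed above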
00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:38.754 01:18:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:38.754 [2024-07-21 01:18:23.985302] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:38.754 [2024-07-21 01:18:23.985594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77243 ] 00:08:39.012 [2024-07-21 01:18:24.155564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.012 [2024-07-21 01:18:24.219087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.012 [2024-07-21 01:18:24.219282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.012 [2024-07-21 01:18:24.219346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.012 [2024-07-21 01:18:24.219442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.012 01:18:24 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.012 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:39.013 01:18:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.387 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.387 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.387 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.387 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.387 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.387 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.387 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.388 00:08:40.388 real 0m1.650s 00:08:40.388 user 0m0.015s 00:08:40.388 sys 0m0.004s 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.388 01:18:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:40.388 ************************************ 00:08:40.388 END TEST accel_decomp_mcore 00:08:40.388 ************************************ 00:08:40.388 01:18:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:40.388 01:18:25 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:40.388 01:18:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.388 01:18:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:40.388 ************************************ 00:08:40.388 START TEST accel_decomp_full_mcore 00:08:40.388 ************************************ 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:40.388 01:18:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
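The START/END banners, the real/user/sys triple, and the xtrace_disable calls bracketing every case in this log come from the run_test helper in common/autotest_common.sh. Based only on what the trace shows, its observable effect is roughly the simplified sketch below; the actual helper also handles exit-code propagation and xtrace suppression, which is omitted here.

    run_test() {                                   # simplified sketch, not the actual SPDK helper
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                  # source of the 'real 0m1.6xxs' lines above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }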
00:08:40.388 [2024-07-21 01:18:25.696616] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:40.388 [2024-07-21 01:18:25.696878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77282 ] 00:08:40.647 [2024-07-21 01:18:25.866479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.647 [2024-07-21 01:18:25.936906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.647 [2024-07-21 01:18:25.937065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.647 [2024-07-21 01:18:25.937252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.647 [2024-07-21 01:18:25.937132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.906 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.907 01:18:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:40.907 01:18:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:42.284 01:18:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 ************************************ 00:08:42.284 END TEST accel_decomp_full_mcore 00:08:42.284 ************************************ 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.284 00:08:42.284 real 0m1.647s 00:08:42.284 user 0m0.018s 00:08:42.284 sys 0m0.005s 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.284 01:18:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:42.284 01:18:27 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:42.284 01:18:27 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:42.284 01:18:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.284 01:18:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.284 ************************************ 00:08:42.284 START TEST accel_decomp_mthread 00:08:42.284 ************************************ 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:42.284 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:42.284 [2024-07-21 01:18:27.417527] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:42.284 [2024-07-21 01:18:27.417816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77320 ] 00:08:42.284 [2024-07-21 01:18:27.589329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.542 [2024-07-21 01:18:27.654005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.542 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.542 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.542 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.542 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:42.543 01:18:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.918 00:08:43.918 real 0m1.624s 00:08:43.918 user 0m1.316s 00:08:43.918 sys 0m0.225s 00:08:43.918 ************************************ 00:08:43.918 END TEST accel_decomp_mthread 00:08:43.918 ************************************ 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:43.918 01:18:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:43.918 01:18:29 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:43.918 01:18:29 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:43.918 01:18:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.918 01:18:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:43.918 ************************************ 00:08:43.918 START TEST accel_decomp_full_mthread 00:08:43.918 ************************************ 00:08:43.918 01:18:29 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:43.918 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:43.918 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:43.919 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:43.919 [2024-07-21 01:18:29.114963] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:43.919 [2024-07-21 01:18:29.115074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77360 ] 00:08:44.178 [2024-07-21 01:18:29.283450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.178 [2024-07-21 01:18:29.341823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.178 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:44.179 01:18:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.639 00:08:45.639 real 0m1.631s 00:08:45.639 user 0m1.334s 00:08:45.639 sys 0m0.212s 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:45.639 ************************************ 00:08:45.639 END TEST accel_decomp_full_mthread 00:08:45.639 01:18:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:45.639 ************************************ 00:08:45.639 01:18:30 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:08:45.639 01:18:30 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:45.639 01:18:30 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:45.639 01:18:30 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:45.639 01:18:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:45.639 01:18:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:45.639 01:18:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:45.639 01:18:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:45.639 01:18:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.639 01:18:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.639 01:18:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:45.639 01:18:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:45.639 01:18:30 accel -- accel/accel.sh@41 -- # jq -r . 00:08:45.639 ************************************ 00:08:45.639 START TEST accel_dif_functional_tests 00:08:45.639 ************************************ 00:08:45.639 01:18:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:45.639 [2024-07-21 01:18:30.851808] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:45.640 [2024-07-21 01:18:30.851930] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77397 ] 00:08:45.899 [2024-07-21 01:18:31.019871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.899 [2024-07-21 01:18:31.081878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.899 [2024-07-21 01:18:31.081906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.899 [2024-07-21 01:18:31.082015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.899 00:08:45.899 00:08:45.899 CUnit - A unit testing framework for C - Version 2.1-3 00:08:45.899 http://cunit.sourceforge.net/ 00:08:45.899 00:08:45.899 00:08:45.899 Suite: accel_dif 00:08:45.899 Test: verify: DIF generated, GUARD check ...passed 00:08:45.899 Test: verify: DIF generated, APPTAG check ...passed 00:08:45.899 Test: verify: DIF generated, REFTAG check ...passed 00:08:45.899 Test: verify: DIF not generated, GUARD check ...passed 00:08:45.899 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 01:18:31.197149] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:45.899 [2024-07-21 01:18:31.197348] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:45.899 passed 00:08:45.899 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 01:18:31.197519] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:45.899 passed 00:08:45.899 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:45.899 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-21 01:18:31.197651] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:45.899 passed 00:08:45.899 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:45.899 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:45.899 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:08:45.899 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-21 01:18:31.198342] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:45.899 passed 00:08:45.899 Test: verify copy: DIF generated, GUARD check ...passed 00:08:45.899 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:45.899 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:45.899 Test: verify copy: DIF not generated, GUARD check ...[2024-07-21 01:18:31.198965] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:45.899 passed 00:08:45.899 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-21 01:18:31.199174] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:45.899 passed 00:08:45.899 Test: verify copy: DIF not generated, REFTAG check ...passed 00:08:45.899 Test: generate copy: DIF generated, GUARD check ...passed 00:08:45.899 Test: generate copy: DIF generated, APTTAG check ...[2024-07-21 01:18:31.199255] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:45.899 passed 00:08:45.899 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:45.899 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:45.899 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:45.899 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:45.899 Test: generate copy: iovecs-len validate ...passed 00:08:45.899 Test: generate copy: buffer alignment validate ...passed 00:08:45.899 00:08:45.899 Run Summary: Type Total Ran Passed Failed Inactive 00:08:45.899 suites 1 1 n/a 0 0 00:08:45.899 tests 26 26 26 0 0 00:08:45.899 asserts 115 115 115 0 n/a 00:08:45.899 00:08:45.899 Elapsed time = 0.009 seconds 00:08:45.899 [2024-07-21 01:18:31.200232] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
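For reference, the accel_dif suite above is driven by the standalone functional-test binary traced at the start of this block. A minimal by-hand re-run would look roughly like the sketch below; the JSON handed to -c is assumed here to be the (empty) accel config that build_accel_config emits for a software-only run, so treat its exact contents as illustrative rather than what the harness actually piped over /dev/fd/62.

    # Sketch of re-running the DIF functional tests by hand.
    # The accel config JSON is an assumed stand-in for what build_accel_config emits.
    cd /home/vagrant/spdk_repo/spdk
    echo '{ "subsystems": [ { "subsystem": "accel", "config": [] } ] }' > /tmp/accel.json
    ./test/accel/dif/dif -c /tmp/accel.json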
00:08:46.466 00:08:46.466 real 0m0.758s 00:08:46.466 user 0m0.974s 00:08:46.466 sys 0m0.281s 00:08:46.466 01:18:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:46.466 01:18:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:46.466 ************************************ 00:08:46.466 END TEST accel_dif_functional_tests 00:08:46.466 ************************************ 00:08:46.466 00:08:46.466 real 0m38.496s 00:08:46.466 user 0m37.670s 00:08:46.466 sys 0m7.067s 00:08:46.466 01:18:31 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:46.466 01:18:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:46.466 ************************************ 00:08:46.466 END TEST accel 00:08:46.466 ************************************ 00:08:46.466 01:18:31 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:46.466 01:18:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:46.466 01:18:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:46.466 01:18:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.466 ************************************ 00:08:46.466 START TEST accel_rpc 00:08:46.466 ************************************ 00:08:46.466 01:18:31 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:46.725 * Looking for test storage... 00:08:46.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:46.725 01:18:31 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:46.725 01:18:31 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77466 00:08:46.725 01:18:31 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:46.725 01:18:31 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77466 00:08:46.725 01:18:31 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 77466 ']' 00:08:46.725 01:18:31 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.725 01:18:31 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:46.725 01:18:31 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.725 01:18:31 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:46.725 01:18:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.725 [2024-07-21 01:18:31.892446] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:46.725 [2024-07-21 01:18:31.892565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77466 ] 00:08:46.984 [2024-07-21 01:18:32.061379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.984 [2024-07-21 01:18:32.123480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.551 01:18:32 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:47.552 01:18:32 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:47.552 01:18:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:47.552 01:18:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:47.552 01:18:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:47.552 01:18:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:47.552 01:18:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:47.552 01:18:32 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:47.552 01:18:32 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.552 01:18:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.552 ************************************ 00:08:47.552 START TEST accel_assign_opcode 00:08:47.552 ************************************ 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:47.552 [2024-07-21 01:18:32.679510] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:47.552 [2024-07-21 01:18:32.691465] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.552 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:47.810 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.810 01:18:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:47.810 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.810 01:18:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:47.810 01:18:32 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:47.810 01:18:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:47.810 01:18:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.810 software 00:08:47.810 00:08:47.810 real 0m0.356s 00:08:47.810 user 0m0.047s 00:08:47.810 sys 0m0.018s 00:08:47.810 01:18:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.810 01:18:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:47.810 ************************************ 00:08:47.810 END TEST accel_assign_opcode 00:08:47.810 ************************************ 00:08:47.810 01:18:33 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77466 00:08:47.810 01:18:33 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 77466 ']' 00:08:47.810 01:18:33 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 77466 00:08:47.810 01:18:33 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:08:47.810 01:18:33 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:47.810 01:18:33 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77466 00:08:48.069 01:18:33 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:48.069 01:18:33 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:48.069 killing process with pid 77466 00:08:48.069 01:18:33 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77466' 00:08:48.069 01:18:33 accel_rpc -- common/autotest_common.sh@965 -- # kill 77466 00:08:48.069 01:18:33 accel_rpc -- common/autotest_common.sh@970 -- # wait 77466 00:08:48.635 00:08:48.635 real 0m2.062s 00:08:48.635 user 0m1.801s 00:08:48.635 sys 0m0.677s 00:08:48.635 01:18:33 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:48.635 01:18:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.635 ************************************ 00:08:48.635 END TEST accel_rpc 00:08:48.635 ************************************ 00:08:48.635 01:18:33 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:48.635 01:18:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:48.635 01:18:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:48.635 01:18:33 -- common/autotest_common.sh@10 -- # set +x 00:08:48.635 ************************************ 00:08:48.635 START TEST app_cmdline 00:08:48.635 ************************************ 00:08:48.635 01:18:33 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:48.635 * Looking for test storage... 
00:08:48.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:48.635 01:18:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:48.635 01:18:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77563 00:08:48.635 01:18:33 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:48.635 01:18:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77563 00:08:48.635 01:18:33 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 77563 ']' 00:08:48.635 01:18:33 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.635 01:18:33 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:48.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.635 01:18:33 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.635 01:18:33 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:48.635 01:18:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:48.894 [2024-07-21 01:18:34.030360] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:48.894 [2024-07-21 01:18:34.030475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77563 ] 00:08:48.894 [2024-07-21 01:18:34.199381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.152 [2024-07-21 01:18:34.262491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.718 01:18:34 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:49.718 01:18:34 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:49.718 { 00:08:49.718 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:08:49.718 "fields": { 00:08:49.718 "major": 24, 00:08:49.718 "minor": 5, 00:08:49.718 "patch": 1, 00:08:49.718 "suffix": "-pre", 00:08:49.718 "commit": "5fa2f5086" 00:08:49.718 } 00:08:49.718 } 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:49.718 01:18:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.718 01:18:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:49.718 01:18:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:49.718 01:18:35 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.977 01:18:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:49.977 01:18:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:49.977 01:18:35 app_cmdline -- app/cmdline.sh@30 -- 
# NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:49.977 request: 00:08:49.977 { 00:08:49.977 "method": "env_dpdk_get_mem_stats", 00:08:49.977 "req_id": 1 00:08:49.977 } 00:08:49.977 Got JSON-RPC error response 00:08:49.977 response: 00:08:49.977 { 00:08:49.977 "code": -32601, 00:08:49.977 "message": "Method not found" 00:08:49.977 } 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:49.977 01:18:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77563 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 77563 ']' 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 77563 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77563 00:08:49.977 killing process with pid 77563 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77563' 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@965 -- # kill 77563 00:08:49.977 01:18:35 app_cmdline -- common/autotest_common.sh@970 -- # wait 77563 00:08:50.558 00:08:50.558 real 0m2.047s 00:08:50.558 user 0m2.060s 00:08:50.558 sys 0m0.692s 00:08:50.558 01:18:35 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.558 01:18:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:50.558 ************************************ 00:08:50.558 END TEST app_cmdline 00:08:50.558 ************************************ 00:08:50.817 01:18:35 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 
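The app_cmdline run above starts spdk_tgt with an RPC allow-list, which is why env_dpdk_get_mem_stats comes back as JSON-RPC error -32601 while spdk_get_version succeeds. A rough by-hand equivalent, assuming the default /var/tmp/spdk.sock socket and an otherwise idle host, is:

    # Sketch of the allow-list behaviour exercised by cmdline.sh.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2   # the test proper uses waitforlisten rather than a fixed sleep
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version           # allowed: returns the version object seen above
    $rpc rpc_get_methods            # allowed: lists exactly the two permitted methods
    $rpc env_dpdk_get_mem_stats     # not on the list: fails with -32601 "Method not found"
    kill %1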
00:08:50.817 01:18:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:50.817 01:18:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.817 01:18:35 -- common/autotest_common.sh@10 -- # set +x 00:08:50.817 ************************************ 00:08:50.817 START TEST version 00:08:50.817 ************************************ 00:08:50.817 01:18:35 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:50.817 * Looking for test storage... 00:08:50.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:50.817 01:18:36 version -- app/version.sh@17 -- # get_header_version major 00:08:50.817 01:18:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # cut -f2 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:50.817 01:18:36 version -- app/version.sh@17 -- # major=24 00:08:50.817 01:18:36 version -- app/version.sh@18 -- # get_header_version minor 00:08:50.817 01:18:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # cut -f2 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:50.817 01:18:36 version -- app/version.sh@18 -- # minor=5 00:08:50.817 01:18:36 version -- app/version.sh@19 -- # get_header_version patch 00:08:50.817 01:18:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # cut -f2 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:50.817 01:18:36 version -- app/version.sh@19 -- # patch=1 00:08:50.817 01:18:36 version -- app/version.sh@20 -- # get_header_version suffix 00:08:50.817 01:18:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # cut -f2 00:08:50.817 01:18:36 version -- app/version.sh@14 -- # tr -d '"' 00:08:50.817 01:18:36 version -- app/version.sh@20 -- # suffix=-pre 00:08:50.817 01:18:36 version -- app/version.sh@22 -- # version=24.5 00:08:50.817 01:18:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:50.817 01:18:36 version -- app/version.sh@25 -- # version=24.5.1 00:08:50.817 01:18:36 version -- app/version.sh@28 -- # version=24.5.1rc0 00:08:50.817 01:18:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:50.817 01:18:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:51.076 01:18:36 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:08:51.076 01:18:36 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:08:51.076 00:08:51.076 real 0m0.222s 00:08:51.076 user 0m0.106s 00:08:51.076 sys 0m0.170s 00:08:51.076 ************************************ 00:08:51.076 END TEST version 00:08:51.076 ************************************ 00:08:51.076 01:18:36 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:51.076 01:18:36 version -- common/autotest_common.sh@10 -- # set +x 00:08:51.076 
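The version test condenses to the grep/cut/tr pipeline traced above against include/spdk/version.h, followed by a comparison with the version string reported by the Python package. A compact restatement is sketched below; the "-pre suffix maps to rc0" step is an assumption inferred from the 24.5.1rc0 strings in the trace, not a claim about version.sh internals.

    # Condensed form of test/app/version.sh as traced above.
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(ver MAJOR); minor=$(ver MINOR); patch=$(ver PATCH); suffix=$(ver SUFFIX)
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    [[ -n "$suffix" ]] && version="${version}rc0"   # assumption: '-pre' suffix maps to rc0
    py_version=$(PYTHONPATH=/home/vagrant/spdk_repo/spdk/python \
        python3 -c 'import spdk; print(spdk.__version__)')
    [[ "$py_version" == "$version" ]] && echo "version OK: $version"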
01:18:36 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:51.076 01:18:36 -- spdk/autotest.sh@198 -- # uname -s 00:08:51.076 01:18:36 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:51.076 01:18:36 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:51.076 01:18:36 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:51.076 01:18:36 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:51.076 01:18:36 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:51.076 01:18:36 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:51.076 01:18:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:51.076 01:18:36 -- common/autotest_common.sh@10 -- # set +x 00:08:51.076 ************************************ 00:08:51.076 START TEST blockdev_nvme 00:08:51.076 ************************************ 00:08:51.076 01:18:36 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:51.076 * Looking for test storage... 00:08:51.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:51.076 01:18:36 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77708 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77708 00:08:51.076 
01:18:36 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 77708 ']' 00:08:51.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.076 01:18:36 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.076 01:18:36 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:51.076 01:18:36 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:51.076 01:18:36 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.076 01:18:36 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:51.076 01:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.335 [2024-07-21 01:18:36.469365] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:51.335 [2024-07-21 01:18:36.469798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77708 ] 00:08:51.335 [2024-07-21 01:18:36.636811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.594 [2024-07-21 01:18:36.699988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.162 01:18:37 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:52.162 01:18:37 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:08:52.162 01:18:37 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:52.162 01:18:37 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:52.162 01:18:37 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:52.162 01:18:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:52.162 01:18:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.162 01:18:37 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:52.162 01:18:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.162 01:18:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.420 01:18:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.420 01:18:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:52.679 01:18:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.680 01:18:37 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:52.680 01:18:37 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:52.680 01:18:37 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "10391d0f-58ff-4e95-93cb-ba1e2b66a9d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "10391d0f-58ff-4e95-93cb-ba1e2b66a9d6",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ecad6a14-f7fc-4566-96d1-f9b0b725ef52"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ecad6a14-f7fc-4566-96d1-f9b0b725ef52",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' 
"reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "12661431-e5e0-4206-ba8f-4fe4356c25e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "12661431-e5e0-4206-ba8f-4fe4356c25e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1a0cf959-a381-490f-87ba-d7f71db076a8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1a0cf959-a381-490f-87ba-d7f71db076a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0f284eee-f8d0-40cb-9394-37b643679eb5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' 
"uuid": "0f284eee-f8d0-40cb-9394-37b643679eb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f110d6df-beb1-433d-a67e-5ce86de7b8d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f110d6df-beb1-433d-a67e-5ce86de7b8d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:52.680 01:18:37 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:52.680 01:18:37 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:52.680 01:18:37 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:52.680 01:18:37 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 77708 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 77708 ']' 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 77708 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77708 00:08:52.680 killing process with pid 77708 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:52.680 01:18:37 blockdev_nvme -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 77708' 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 77708 00:08:52.680 01:18:37 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 77708 00:08:53.247 01:18:38 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:53.247 01:18:38 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:53.247 01:18:38 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:53.247 01:18:38 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:53.247 01:18:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.247 ************************************ 00:08:53.247 START TEST bdev_hello_world 00:08:53.247 ************************************ 00:08:53.247 01:18:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:53.506 [2024-07-21 01:18:38.594114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:53.506 [2024-07-21 01:18:38.594258] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77781 ] 00:08:53.506 [2024-07-21 01:18:38.765575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.764 [2024-07-21 01:18:38.828921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.022 [2024-07-21 01:18:39.248701] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:54.022 [2024-07-21 01:18:39.248750] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:54.022 [2024-07-21 01:18:39.248771] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:54.022 [2024-07-21 01:18:39.251223] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:54.022 [2024-07-21 01:18:39.251728] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:54.022 [2024-07-21 01:18:39.251768] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:54.022 [2024-07-21 01:18:39.252263] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
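[editor's note] The hello_bdev NOTICE lines above come from SPDK's hello_bdev example, which opens the named bdev, writes a buffer, reads it back, and stops the app. A minimal sketch of invoking it by hand, assuming an SPDK build tree and a bdev JSON config equivalent to the one used in this run (paths are illustrative):
  # Point the example at a JSON bdev config and a target bdev name
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1
  # On success the console output ends with: Read string from bdev : Hello World!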
00:08:54.022 00:08:54.022 [2024-07-21 01:18:39.252292] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:54.280 00:08:54.280 real 0m1.079s 00:08:54.280 user 0m0.690s 00:08:54.280 sys 0m0.286s 00:08:54.280 01:18:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:54.280 ************************************ 00:08:54.280 END TEST bdev_hello_world 00:08:54.280 01:18:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:54.280 ************************************ 00:08:54.539 01:18:39 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:54.539 01:18:39 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:54.539 01:18:39 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:54.539 01:18:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.539 ************************************ 00:08:54.539 START TEST bdev_bounds 00:08:54.539 ************************************ 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:54.539 Process bdevio pid: 77812 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77812 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77812' 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77812 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 77812 ']' 00:08:54.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:54.539 01:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:54.539 [2024-07-21 01:18:39.749192] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:54.539 [2024-07-21 01:18:39.749347] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77812 ] 00:08:54.797 [2024-07-21 01:18:39.923401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.797 [2024-07-21 01:18:39.988163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.797 [2024-07-21 01:18:39.988295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.797 [2024-07-21 01:18:39.988370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.363 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:55.363 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:55.363 01:18:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:55.363 I/O targets: 00:08:55.363 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:55.363 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:55.363 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.363 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.363 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:55.363 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:55.363 00:08:55.363 00:08:55.363 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.363 http://cunit.sourceforge.net/ 00:08:55.363 00:08:55.363 00:08:55.363 Suite: bdevio tests on: Nvme3n1 00:08:55.363 Test: blockdev write read block ...passed 00:08:55.363 Test: blockdev write zeroes read block ...passed 00:08:55.363 Test: blockdev write zeroes read no split ...passed 00:08:55.363 Test: blockdev write zeroes read split ...passed 00:08:55.363 Test: blockdev write zeroes read split partial ...passed 00:08:55.363 Test: blockdev reset ...[2024-07-21 01:18:40.614935] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:55.363 [2024-07-21 01:18:40.617379] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
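[editor's note] The bdev_bounds test drives the bdevio application traced above: bdevio is started in wait mode (-w) against the same bdev JSON config, and tests.py then issues perform_tests over the RPC socket. A rough sketch of reproducing that by hand, assuming the default /var/tmp/spdk.sock socket and illustrative paths:
  # Start bdevio in wait mode so it pauses until tests are requested
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  # Kick off the CUnit suites once the RPC socket is listening
  ./test/bdev/bdevio/tests.py perform_tests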
00:08:55.363 passed 00:08:55.363 Test: blockdev write read 8 blocks ...passed 00:08:55.363 Test: blockdev write read size > 128k ...passed 00:08:55.363 Test: blockdev write read invalid size ...passed 00:08:55.363 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.363 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.363 Test: blockdev write read max offset ...passed 00:08:55.363 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.363 Test: blockdev writev readv 8 blocks ...passed 00:08:55.363 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.363 Test: blockdev writev readv block ...passed 00:08:55.363 Test: blockdev writev readv size > 128k ...passed 00:08:55.363 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.363 Test: blockdev comparev and writev ...[2024-07-21 01:18:40.626116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c480e000 len:0x1000 00:08:55.363 [2024-07-21 01:18:40.626173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.363 passed 00:08:55.363 Test: blockdev nvme passthru rw ...passed 00:08:55.363 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.363 Test: blockdev nvme admin passthru ...[2024-07-21 01:18:40.627128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.363 [2024-07-21 01:18:40.627174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.363 passed 00:08:55.363 Test: blockdev copy ...passed 00:08:55.363 Suite: bdevio tests on: Nvme2n3 00:08:55.363 Test: blockdev write read block ...passed 00:08:55.363 Test: blockdev write zeroes read block ...passed 00:08:55.363 Test: blockdev write zeroes read no split ...passed 00:08:55.363 Test: blockdev write zeroes read split ...passed 00:08:55.363 Test: blockdev write zeroes read split partial ...passed 00:08:55.363 Test: blockdev reset ...[2024-07-21 01:18:40.652074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:55.363 passed 00:08:55.363 Test: blockdev write read 8 blocks ...[2024-07-21 01:18:40.654482] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.363 passed 00:08:55.363 Test: blockdev write read size > 128k ...passed 00:08:55.363 Test: blockdev write read invalid size ...passed 00:08:55.363 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.363 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.363 Test: blockdev write read max offset ...passed 00:08:55.363 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.363 Test: blockdev writev readv 8 blocks ...passed 00:08:55.363 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.363 Test: blockdev writev readv block ...passed 00:08:55.363 Test: blockdev writev readv size > 128k ...passed 00:08:55.363 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.363 Test: blockdev comparev and writev ...[2024-07-21 01:18:40.662251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4808000 len:0x1000 00:08:55.363 [2024-07-21 01:18:40.662301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.363 passed 00:08:55.363 Test: blockdev nvme passthru rw ...passed 00:08:55.363 Test: blockdev nvme passthru vendor specific ...[2024-07-21 01:18:40.663443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.363 [2024-07-21 01:18:40.663476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.363 passed 00:08:55.363 Test: blockdev nvme admin passthru ...passed 00:08:55.363 Test: blockdev copy ...passed 00:08:55.363 Suite: bdevio tests on: Nvme2n2 00:08:55.622 Test: blockdev write read block ...passed 00:08:55.622 Test: blockdev write zeroes read block ...passed 00:08:55.622 Test: blockdev write zeroes read no split ...passed 00:08:55.622 Test: blockdev write zeroes read split ...passed 00:08:55.622 Test: blockdev write zeroes read split partial ...passed 00:08:55.622 Test: blockdev reset ...[2024-07-21 01:18:40.692126] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:55.622 [2024-07-21 01:18:40.694785] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.622 passed 00:08:55.622 Test: blockdev write read 8 blocks ...passed 00:08:55.622 Test: blockdev write read size > 128k ...passed 00:08:55.622 Test: blockdev write read invalid size ...passed 00:08:55.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.622 Test: blockdev write read max offset ...passed 00:08:55.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.622 Test: blockdev writev readv 8 blocks ...passed 00:08:55.622 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.622 Test: blockdev writev readv block ...passed 00:08:55.622 Test: blockdev writev readv size > 128k ...passed 00:08:55.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.623 Test: blockdev comparev and writev ...[2024-07-21 01:18:40.703798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4804000 len:0x1000 00:08:55.623 [2024-07-21 01:18:40.703861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.623 passed 00:08:55.623 Test: blockdev nvme passthru rw ...passed 00:08:55.623 Test: blockdev nvme passthru vendor specific ...[2024-07-21 01:18:40.704850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.623 [2024-07-21 01:18:40.704882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.623 passed 00:08:55.623 Test: blockdev nvme admin passthru ...passed 00:08:55.623 Test: blockdev copy ...passed 00:08:55.623 Suite: bdevio tests on: Nvme2n1 00:08:55.623 Test: blockdev write read block ...passed 00:08:55.623 Test: blockdev write zeroes read block ...passed 00:08:55.623 Test: blockdev write zeroes read no split ...passed 00:08:55.623 Test: blockdev write zeroes read split ...passed 00:08:55.623 Test: blockdev write zeroes read split partial ...passed 00:08:55.623 Test: blockdev reset ...[2024-07-21 01:18:40.732845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:55.623 [2024-07-21 01:18:40.735225] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.623 passed 00:08:55.623 Test: blockdev write read 8 blocks ...passed 00:08:55.623 Test: blockdev write read size > 128k ...passed 00:08:55.623 Test: blockdev write read invalid size ...passed 00:08:55.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.623 Test: blockdev write read max offset ...passed 00:08:55.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.623 Test: blockdev writev readv 8 blocks ...passed 00:08:55.623 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.623 Test: blockdev writev readv block ...passed 00:08:55.623 Test: blockdev writev readv size > 128k ...passed 00:08:55.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.623 Test: blockdev comparev and writev ...[2024-07-21 01:18:40.744005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4804000 len:0x1000 00:08:55.623 [2024-07-21 01:18:40.744054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.623 passed 00:08:55.623 Test: blockdev nvme passthru rw ...passed 00:08:55.623 Test: blockdev nvme passthru vendor specific ...[2024-07-21 01:18:40.745075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.623 [2024-07-21 01:18:40.745109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.623 passed 00:08:55.623 Test: blockdev nvme admin passthru ...passed 00:08:55.623 Test: blockdev copy ...passed 00:08:55.623 Suite: bdevio tests on: Nvme1n1 00:08:55.623 Test: blockdev write read block ...passed 00:08:55.623 Test: blockdev write zeroes read block ...passed 00:08:55.623 Test: blockdev write zeroes read no split ...passed 00:08:55.623 Test: blockdev write zeroes read split ...passed 00:08:55.623 Test: blockdev write zeroes read split partial ...passed 00:08:55.623 Test: blockdev reset ...[2024-07-21 01:18:40.773111] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:55.623 passed 00:08:55.623 Test: blockdev write read 8 blocks ...[2024-07-21 01:18:40.775401] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:55.623 passed 00:08:55.623 Test: blockdev write read size > 128k ...passed 00:08:55.623 Test: blockdev write read invalid size ...passed 00:08:55.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.623 Test: blockdev write read max offset ...passed 00:08:55.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.623 Test: blockdev writev readv 8 blocks ...passed 00:08:55.623 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.623 Test: blockdev writev readv block ...passed 00:08:55.623 Test: blockdev writev readv size > 128k ...passed 00:08:55.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.623 Test: blockdev comparev and writev ...[2024-07-21 01:18:40.783614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0e0e000 len:0x1000 00:08:55.623 [2024-07-21 01:18:40.783664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:55.623 passed 00:08:55.623 Test: blockdev nvme passthru rw ...passed 00:08:55.623 Test: blockdev nvme passthru vendor specific ...passed 00:08:55.623 Test: blockdev nvme admin passthru ...[2024-07-21 01:18:40.784762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:55.623 [2024-07-21 01:18:40.784795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:55.623 passed 00:08:55.623 Test: blockdev copy ...passed 00:08:55.623 Suite: bdevio tests on: Nvme0n1 00:08:55.623 Test: blockdev write read block ...passed 00:08:55.623 Test: blockdev write zeroes read block ...passed 00:08:55.623 Test: blockdev write zeroes read no split ...passed 00:08:55.623 Test: blockdev write zeroes read split ...passed 00:08:55.623 Test: blockdev write zeroes read split partial ...passed 00:08:55.623 Test: blockdev reset ...[2024-07-21 01:18:40.814331] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:55.623 passed 00:08:55.623 Test: blockdev write read 8 blocks ...[2024-07-21 01:18:40.816614] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:55.623 passed 00:08:55.623 Test: blockdev write read size > 128k ...passed 00:08:55.623 Test: blockdev write read invalid size ...passed 00:08:55.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:55.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:55.623 Test: blockdev write read max offset ...passed 00:08:55.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:55.623 Test: blockdev writev readv 8 blocks ...passed 00:08:55.623 Test: blockdev writev readv 30 x 1block ...passed 00:08:55.623 Test: blockdev writev readv block ...passed 00:08:55.623 Test: blockdev writev readv size > 128k ...passed 00:08:55.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:55.623 Test: blockdev comparev and writev ...passed 00:08:55.623 Test: blockdev nvme passthru rw ...[2024-07-21 01:18:40.823365] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:55.623 separate metadata which is not supported yet. 
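[editor's note] Each bdevio suite above maps to one of the I/O targets listed at the start of the run, and the block counts and block sizes match the bdev JSON dumped earlier. A hedged one-liner for pulling the same summary from a running target (bdev_get_bdevs is the standard SPDK RPC; the jq field names are taken from that dump):
  # List bdev name, block count and block size, as in the "I/O targets" header
  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name): \(.num_blocks) blocks of \(.block_size) bytes"'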
00:08:55.623 passed 00:08:55.623 Test: blockdev nvme passthru vendor specific ...[2024-07-21 01:18:40.824091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:55.623 [2024-07-21 01:18:40.824132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:55.623 passed 00:08:55.623 Test: blockdev nvme admin passthru ...passed 00:08:55.623 Test: blockdev copy ...passed 00:08:55.623 00:08:55.623 Run Summary: Type Total Ran Passed Failed Inactive 00:08:55.623 suites 6 6 n/a 0 0 00:08:55.623 tests 138 138 138 0 0 00:08:55.623 asserts 893 893 893 0 n/a 00:08:55.623 00:08:55.623 Elapsed time = 0.518 seconds 00:08:55.623 0 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77812 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 77812 ']' 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 77812 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77812 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77812' 00:08:55.623 killing process with pid 77812 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 77812 00:08:55.623 01:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 77812 00:08:55.883 01:18:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:55.883 00:08:55.883 real 0m1.492s 00:08:55.883 user 0m3.270s 00:08:55.883 sys 0m0.442s 00:08:55.883 01:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:55.883 01:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:55.883 ************************************ 00:08:55.883 END TEST bdev_bounds 00:08:55.883 ************************************ 00:08:56.142 01:18:41 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:56.142 01:18:41 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:56.142 01:18:41 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:56.142 01:18:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.142 ************************************ 00:08:56.142 START TEST bdev_nbd 00:08:56.142 ************************************ 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77866 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77866 /var/tmp/spdk-nbd.sock 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 77866 ']' 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:56.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:56.142 01:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:56.142 [2024-07-21 01:18:41.327100] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
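[editor's note] For the NBD test, a bdev_svc app is started on its own RPC socket (/var/tmp/spdk-nbd.sock) with the same bdev JSON config, and each bdev is then exported as a kernel /dev/nbdX device over that socket. A condensed sketch of the same flow, assuming the nbd kernel module is loaded and using illustrative paths:
  # Start a bare bdev service on a dedicated RPC socket
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json ./test/bdev/bdev.json &
  # Export one bdev as an NBD block device
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # Confirm the bdev-to-NBD mapping
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks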
00:08:56.142 [2024-07-21 01:18:41.327390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.401 [2024-07-21 01:18:41.501201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.401 [2024-07-21 01:18:41.563251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:56.968 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.227 1+0 records in 00:08:57.227 1+0 records out 
00:08:57.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653639 s, 6.3 MB/s 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:57.227 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.486 1+0 records in 00:08:57.486 1+0 records out 00:08:57.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617095 s, 6.6 MB/s 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:57.486 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:57.745 
01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.745 1+0 records in 00:08:57.745 1+0 records out 00:08:57.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648042 s, 6.3 MB/s 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:57.745 01:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:57.745 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.003 1+0 records in 00:08:58.003 1+0 records out 00:08:58.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00088517 s, 4.6 MB/s 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@882 -- # size=4096 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:58.003 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.004 1+0 records in 00:08:58.004 1+0 records out 00:08:58.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615598 s, 6.7 MB/s 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:58.004 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( 
i = 1 )) 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:58.262 1+0 records in 00:08:58.262 1+0 records out 00:08:58.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737755 s, 5.6 MB/s 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:58.262 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd0", 00:08:58.521 "bdev_name": "Nvme0n1" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd1", 00:08:58.521 "bdev_name": "Nvme1n1" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd2", 00:08:58.521 "bdev_name": "Nvme2n1" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd3", 00:08:58.521 "bdev_name": "Nvme2n2" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd4", 00:08:58.521 "bdev_name": "Nvme2n3" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd5", 00:08:58.521 "bdev_name": "Nvme3n1" 00:08:58.521 } 00:08:58.521 ]' 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd0", 00:08:58.521 "bdev_name": "Nvme0n1" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd1", 00:08:58.521 "bdev_name": "Nvme1n1" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd2", 00:08:58.521 "bdev_name": "Nvme2n1" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd3", 00:08:58.521 "bdev_name": "Nvme2n2" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd4", 00:08:58.521 "bdev_name": "Nvme2n3" 00:08:58.521 }, 00:08:58.521 { 00:08:58.521 "nbd_device": "/dev/nbd5", 00:08:58.521 "bdev_name": "Nvme3n1" 00:08:58.521 } 00:08:58.521 ]' 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.521 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.780 01:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.039 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 
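[editor's note] The waitfornbd/waitfornbd_exit helpers traced here poll /proc/partitions and do a single direct-I/O dd read to decide when an NBD device is usable or fully gone. A simplified sketch of the same checks around nbd_stop_disk (the grep, dd and 20-retry pattern are taken from the trace; the sleep interval and output path are illustrative, this is not the harness itself):
  # Wait until /dev/nbd0 shows up in /proc/partitions and answers a 4 KiB direct read
  for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct && break
    sleep 0.1
  done
  # Tear the device down and wait for it to disappear again
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done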
00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.298 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.557 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.816 01:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:00.074 /dev/nbd0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:00.074 01:18:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:00.074 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.074 1+0 records in 00:09:00.074 1+0 records out 00:09:00.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720138 s, 5.7 MB/s 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:00.333 /dev/nbd1 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.333 1+0 records in 00:09:00.333 1+0 records out 00:09:00.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000917872 s, 4.5 MB/s 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # 
return 0 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.333 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:00.591 /dev/nbd10 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.592 1+0 records in 00:09:00.592 1+0 records out 00:09:00.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000684609 s, 6.0 MB/s 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.592 01:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:00.850 /dev/nbd11 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.850 1+0 records in 00:09:00.850 1+0 records out 00:09:00.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882247 s, 4.6 MB/s 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.850 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:01.108 /dev/nbd12 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.108 1+0 records in 00:09:01.108 1+0 records out 00:09:01.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633328 s, 6.5 MB/s 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:01.108 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:01.367 /dev/nbd13 00:09:01.367 01:18:46 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:01.367 1+0 records in 00:09:01.367 1+0 records out 00:09:01.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507231 s, 8.1 MB/s 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd0", 00:09:01.367 "bdev_name": "Nvme0n1" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd1", 00:09:01.367 "bdev_name": "Nvme1n1" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd10", 00:09:01.367 "bdev_name": "Nvme2n1" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd11", 00:09:01.367 "bdev_name": "Nvme2n2" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd12", 00:09:01.367 "bdev_name": "Nvme2n3" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd13", 00:09:01.367 "bdev_name": "Nvme3n1" 00:09:01.367 } 00:09:01.367 ]' 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.367 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd0", 00:09:01.367 "bdev_name": "Nvme0n1" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd1", 00:09:01.367 "bdev_name": "Nvme1n1" 00:09:01.367 }, 00:09:01.367 { 
00:09:01.367 "nbd_device": "/dev/nbd10", 00:09:01.367 "bdev_name": "Nvme2n1" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd11", 00:09:01.367 "bdev_name": "Nvme2n2" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd12", 00:09:01.367 "bdev_name": "Nvme2n3" 00:09:01.367 }, 00:09:01.367 { 00:09:01.367 "nbd_device": "/dev/nbd13", 00:09:01.367 "bdev_name": "Nvme3n1" 00:09:01.367 } 00:09:01.367 ]' 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:01.626 /dev/nbd1 00:09:01.626 /dev/nbd10 00:09:01.626 /dev/nbd11 00:09:01.626 /dev/nbd12 00:09:01.626 /dev/nbd13' 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:01.626 /dev/nbd1 00:09:01.626 /dev/nbd10 00:09:01.626 /dev/nbd11 00:09:01.626 /dev/nbd12 00:09:01.626 /dev/nbd13' 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:01.626 256+0 records in 00:09:01.626 256+0 records out 00:09:01.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125424 s, 83.6 MB/s 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:01.626 256+0 records in 00:09:01.626 256+0 records out 00:09:01.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124509 s, 8.4 MB/s 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.626 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:01.884 256+0 records in 00:09:01.884 256+0 records out 00:09:01.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126229 s, 8.3 MB/s 00:09:01.884 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.884 01:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:01.884 256+0 records in 00:09:01.884 256+0 records out 00:09:01.884 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.126695 s, 8.3 MB/s 00:09:01.884 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.884 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:02.142 256+0 records in 00:09:02.142 256+0 records out 00:09:02.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129542 s, 8.1 MB/s 00:09:02.142 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.142 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:02.142 256+0 records in 00:09:02.142 256+0 records out 00:09:02.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130481 s, 8.0 MB/s 00:09:02.142 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.142 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:02.401 256+0 records in 00:09:02.401 256+0 records out 00:09:02.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12773 s, 8.2 MB/s 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.401 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.710 01:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.977 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.236 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.494 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:03.753 01:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:04.012 malloc_lvol_verify 00:09:04.012 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:04.271 03b2c540-2671-4d5d-930b-025077c8295e 00:09:04.271 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:04.530 260345a8-dccd-4294-8c58-c8946b015c7f 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:04.530 /dev/nbd0 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:04.530 mke2fs 1.46.5 (30-Dec-2021) 00:09:04.530 Discarding device blocks: 0/4096 done 00:09:04.530 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:04.530 00:09:04.530 Allocating group tables: 0/1 done 00:09:04.530 Writing inode tables: 0/1 done 00:09:04.530 Creating journal (1024 blocks): done 00:09:04.530 Writing superblocks and filesystem accounting information: 0/1 done 00:09:04.530 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.530 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77866 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 77866 ']' 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 77866 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:04.790 01:18:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77866 00:09:04.790 01:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:04.790 01:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:04.790 killing process with pid 77866 00:09:04.790 01:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77866' 00:09:04.790 01:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 77866 00:09:04.790 01:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 77866 00:09:05.358 01:18:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:05.358 00:09:05.358 real 0m9.147s 00:09:05.358 user 0m11.981s 00:09:05.358 sys 0m4.098s 00:09:05.358 01:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:05.358 01:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 ************************************ 00:09:05.358 END TEST bdev_nbd 00:09:05.358 ************************************ 00:09:05.358 01:18:50 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:05.358 01:18:50 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:09:05.359 skipping fio tests on NVMe due to multi-ns failures. 00:09:05.359 01:18:50 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
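The bdev_nbd stage above is a full NBD round trip: each bdev is exported as /dev/nbdN via nbd_start_disk, polled in /proc/partitions until it is readable, filled with random data that is compared back with cmp, then torn down with nbd_stop_disk before the NBD target process is killed. The sketch below condenses that flow for a single bdev; it is a minimal illustration that assumes an spdk-nbd target is already listening on /var/tmp/spdk-nbd.sock and exposes a bdev named Nvme0n1, with /tmp paths standing in for the repository test files.

    # Export one bdev over NBD and verify data integrity end to end.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0

    # Wait until the kernel has registered the device, then confirm it is readable.
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # Push a known 1 MiB pattern through the device and read it back for comparison.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

    $rpc -s $sock nbd_stop_disk /dev/nbd0
    rm -f /tmp/nbdtest /tmp/nbdrandtest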
00:09:05.359 01:18:50 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:05.359 01:18:50 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:05.359 01:18:50 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:05.359 01:18:50 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:05.359 01:18:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.359 ************************************ 00:09:05.359 START TEST bdev_verify 00:09:05.359 ************************************ 00:09:05.359 01:18:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:05.359 [2024-07-21 01:18:50.530903] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:05.359 [2024-07-21 01:18:50.531044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78233 ] 00:09:05.617 [2024-07-21 01:18:50.697263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.617 [2024-07-21 01:18:50.758769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.617 [2024-07-21 01:18:50.758792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.184 Running I/O for 5 seconds... 00:09:11.448 00:09:11.448 Latency(us) 00:09:11.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.448 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x0 length 0xbd0bd 00:09:11.448 Nvme0n1 : 5.07 1844.81 7.21 0.00 0.00 69262.56 9738.28 62746.11 00:09:11.448 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:11.448 Nvme0n1 : 5.06 1820.46 7.11 0.00 0.00 70121.10 16423.48 82117.40 00:09:11.448 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x0 length 0xa0000 00:09:11.448 Nvme1n1 : 5.07 1843.99 7.20 0.00 0.00 69199.06 10948.99 64430.57 00:09:11.448 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0xa0000 length 0xa0000 00:09:11.448 Nvme1n1 : 5.06 1819.94 7.11 0.00 0.00 69961.53 18107.94 69905.07 00:09:11.448 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x0 length 0x80000 00:09:11.448 Nvme2n1 : 5.07 1843.15 7.20 0.00 0.00 69126.17 11949.13 64009.46 00:09:11.448 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x80000 length 0x80000 00:09:11.448 Nvme2n1 : 5.07 1819.49 7.11 0.00 0.00 69804.22 16528.76 61903.88 00:09:11.448 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x0 length 0x80000 00:09:11.448 Nvme2n2 : 5.07 1842.70 7.20 0.00 0.00 68975.50 12054.41 63167.23 00:09:11.448 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:09:11.448 Verification LBA range: start 0x80000 length 0x80000 00:09:11.448 Nvme2n2 : 5.07 1818.69 7.10 0.00 0.00 69682.55 16528.76 59798.31 00:09:11.448 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x0 length 0x80000 00:09:11.448 Nvme2n3 : 5.07 1842.26 7.20 0.00 0.00 68896.76 11054.27 61061.65 00:09:11.448 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x80000 length 0x80000 00:09:11.448 Nvme2n3 : 5.07 1828.69 7.14 0.00 0.00 69179.41 3013.60 61903.88 00:09:11.448 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x0 length 0x20000 00:09:11.448 Nvme3n1 : 5.07 1841.81 7.19 0.00 0.00 68817.82 10317.31 60219.42 00:09:11.448 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:11.448 Verification LBA range: start 0x20000 length 0x20000 00:09:11.448 Nvme3n1 : 5.08 1828.28 7.14 0.00 0.00 69109.94 3132.04 64009.46 00:09:11.448 =================================================================================================================== 00:09:11.448 Total : 21994.26 85.92 0.00 0.00 69342.43 3013.60 82117.40 00:09:11.706 00:09:11.706 real 0m6.364s 00:09:11.706 user 0m11.768s 00:09:11.706 sys 0m0.366s 00:09:11.706 01:18:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:11.706 01:18:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:11.706 ************************************ 00:09:11.706 END TEST bdev_verify 00:09:11.706 ************************************ 00:09:11.706 01:18:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:11.706 01:18:56 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:11.706 01:18:56 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:11.706 01:18:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.706 ************************************ 00:09:11.706 START TEST bdev_verify_big_io 00:09:11.706 ************************************ 00:09:11.706 01:18:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:11.706 [2024-07-21 01:18:56.962772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:11.706 [2024-07-21 01:18:56.962898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78319 ] 00:09:11.965 [2024-07-21 01:18:57.118873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.965 [2024-07-21 01:18:57.160933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.965 [2024-07-21 01:18:57.161024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.532 Running I/O for 5 seconds... 
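Both verification stages in this section (bdev_verify above and bdev_verify_big_io now starting) drive the same bdevperf example binary against the bdevs described in bdev.json; only the I/O size changes, and the per-device latency tables printed here are bdevperf's own summary output. A standalone equivalent of the verify invocation, taken from the trace and assuming bdev.json was already generated earlier in the run, looks like this:

    # bdevperf verify stage, as invoked by run_test bdev_verify above:
    #   -q 128     queue depth per job
    #   -o 4096    I/O size in bytes (the big_io stage uses -o 65536)
    #   -w verify  write a pattern, read it back and compare
    #   -t 5       run time in seconds
    #   -m 0x3     core mask, two reactors (hence the Core Mask 0x1/0x2 rows above)
    # (-C is carried over unchanged from the harness invocation.)
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/bdevperf --json $SPDK/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3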
00:09:19.100 00:09:19.100 Latency(us) 00:09:19.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.100 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x0 length 0xbd0b 00:09:19.100 Nvme0n1 : 5.51 164.41 10.28 0.00 0.00 731339.10 14317.91 869181.07 00:09:19.100 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:19.100 Nvme0n1 : 5.54 184.68 11.54 0.00 0.00 684280.67 20213.51 677152.69 00:09:19.100 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x0 length 0xa000 00:09:19.100 Nvme1n1 : 5.61 178.49 11.16 0.00 0.00 664464.72 26319.68 609774.32 00:09:19.100 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0xa000 length 0xa000 00:09:19.100 Nvme1n1 : 5.54 181.93 11.37 0.00 0.00 681495.23 29267.48 640094.59 00:09:19.100 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x0 length 0x8000 00:09:19.100 Nvme2n1 : 5.62 182.34 11.40 0.00 0.00 636821.44 24319.38 609774.32 00:09:19.100 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x8000 length 0x8000 00:09:19.100 Nvme2n1 : 5.54 181.34 11.33 0.00 0.00 672615.86 27161.91 650201.34 00:09:19.100 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x0 length 0x8000 00:09:19.100 Nvme2n2 : 5.64 185.32 11.58 0.00 0.00 613050.10 20634.63 613143.24 00:09:19.100 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x8000 length 0x8000 00:09:19.100 Nvme2n2 : 5.54 181.66 11.35 0.00 0.00 660504.13 28004.14 670414.86 00:09:19.100 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x0 length 0x8000 00:09:19.100 Nvme2n3 : 5.70 192.65 12.04 0.00 0.00 581455.49 15897.09 1394732.41 00:09:19.100 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x8000 length 0x8000 00:09:19.100 Nvme2n3 : 5.54 181.07 11.32 0.00 0.00 646798.87 28425.25 683890.53 00:09:19.100 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x0 length 0x2000 00:09:19.100 Nvme3n1 : 5.78 253.74 15.86 0.00 0.00 432880.48 315.84 936559.45 00:09:19.100 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:19.100 Verification LBA range: start 0x2000 length 0x2000 00:09:19.100 Nvme3n1 : 5.55 195.96 12.25 0.00 0.00 588025.84 3145.20 693997.29 00:09:19.100 =================================================================================================================== 00:09:19.100 Total : 2263.59 141.47 0.00 0.00 624124.44 315.84 1394732.41 00:09:19.100 00:09:19.100 real 0m7.352s 00:09:19.100 user 0m13.787s 00:09:19.100 sys 0m0.314s 00:09:19.100 01:19:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.100 01:19:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:19.100 ************************************ 00:09:19.100 END TEST bdev_verify_big_io 00:09:19.100 ************************************ 00:09:19.100 01:19:04 
blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:19.100 01:19:04 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:19.100 01:19:04 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.100 01:19:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.100 ************************************ 00:09:19.100 START TEST bdev_write_zeroes 00:09:19.100 ************************************ 00:09:19.100 01:19:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:19.100 [2024-07-21 01:19:04.410502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:19.100 [2024-07-21 01:19:04.410624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78413 ] 00:09:19.358 [2024-07-21 01:19:04.581417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.358 [2024-07-21 01:19:04.642951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.925 Running I/O for 1 seconds... 00:09:20.857 00:09:20.857 Latency(us) 00:09:20.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.857 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.857 Nvme0n1 : 1.01 13631.39 53.25 0.00 0.00 9363.68 7790.62 20424.07 00:09:20.857 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.857 Nvme1n1 : 1.01 13618.06 53.20 0.00 0.00 9360.64 8159.10 20108.23 00:09:20.857 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.857 Nvme2n1 : 1.01 13639.24 53.28 0.00 0.00 9341.15 6658.88 19055.45 00:09:20.857 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.857 Nvme2n2 : 1.01 13626.04 53.23 0.00 0.00 9334.67 6843.12 18529.05 00:09:20.857 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.857 Nvme2n3 : 1.02 13657.23 53.35 0.00 0.00 9268.76 4316.43 16634.04 00:09:20.857 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.857 Nvme3n1 : 1.02 13645.93 53.30 0.00 0.00 9262.76 4342.75 16844.59 00:09:20.857 =================================================================================================================== 00:09:20.857 Total : 81817.90 319.60 0.00 0.00 9321.79 4316.43 20424.07 00:09:21.424 00:09:21.424 real 0m2.133s 00:09:21.424 user 0m1.696s 00:09:21.424 sys 0m0.329s 00:09:21.424 01:19:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.424 01:19:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:21.424 ************************************ 00:09:21.424 END TEST bdev_write_zeroes 00:09:21.424 ************************************ 00:09:21.424 01:19:06 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes 
-t 1 '' 00:09:21.424 01:19:06 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:21.424 01:19:06 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.424 01:19:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.424 ************************************ 00:09:21.424 START TEST bdev_json_nonenclosed 00:09:21.424 ************************************ 00:09:21.424 01:19:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.424 [2024-07-21 01:19:06.607018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:21.424 [2024-07-21 01:19:06.607141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78455 ] 00:09:21.683 [2024-07-21 01:19:06.778047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.683 [2024-07-21 01:19:06.839369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.683 [2024-07-21 01:19:06.839495] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:21.683 [2024-07-21 01:19:06.839531] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:21.683 [2024-07-21 01:19:06.839545] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.683 00:09:21.683 real 0m0.464s 00:09:21.683 user 0m0.202s 00:09:21.683 sys 0m0.158s 00:09:21.683 01:19:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.683 ************************************ 00:09:21.683 END TEST bdev_json_nonenclosed 00:09:21.683 ************************************ 00:09:21.683 01:19:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:21.942 01:19:07 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.942 01:19:07 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:21.942 01:19:07 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.942 01:19:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.942 ************************************ 00:09:21.942 START TEST bdev_json_nonarray 00:09:21.942 ************************************ 00:09:21.942 01:19:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.942 [2024-07-21 01:19:07.135864] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
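bdev_json_nonenclosed above and bdev_json_nonarray, which starts next, are negative tests: bdevperf is handed a deliberately malformed --json config and is expected to abort with the json_config errors shown in this trace ("not enclosed in {}." and "'subsystems' should be an array") instead of starting I/O. A hypothetical reproduction of the first case, with an invented /tmp config file standing in for the repository's nonenclosed.json, whose contents are not shown here:

    # Feed bdevperf a config whose top level is not a JSON object and require failure.
    # The file contents below are a guess at what triggers "not enclosed in {}.";
    # the real nonenclosed.json may differ.
    SPDK=/home/vagrant/spdk_repo/spdk
    echo '["not", "an", "object"]' > /tmp/nonenclosed.json
    if $SPDK/build/examples/bdevperf --json /tmp/nonenclosed.json \
           -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "expected bdevperf to reject the config" >&2
        exit 1
    fi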
00:09:21.942 [2024-07-21 01:19:07.136008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78486 ] 00:09:22.201 [2024-07-21 01:19:07.308067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.201 [2024-07-21 01:19:07.369899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.201 [2024-07-21 01:19:07.370049] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:09:22.202 [2024-07-21 01:19:07.370081] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:22.202 [2024-07-21 01:19:07.370102] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.202 00:09:22.202 real 0m0.462s 00:09:22.202 user 0m0.199s 00:09:22.202 sys 0m0.159s 00:09:22.202 01:19:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.468 01:19:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:22.468 ************************************ 00:09:22.468 END TEST bdev_json_nonarray 00:09:22.468 ************************************ 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:22.468 01:19:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:22.468 00:09:22.468 real 0m31.360s 00:09:22.468 user 0m45.890s 00:09:22.468 sys 0m7.333s 00:09:22.468 01:19:07 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.468 01:19:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.468 ************************************ 00:09:22.468 END TEST blockdev_nvme 00:09:22.468 ************************************ 00:09:22.468 01:19:07 -- spdk/autotest.sh@213 -- # uname -s 00:09:22.468 01:19:07 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:09:22.468 01:19:07 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:22.468 01:19:07 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:22.468 01:19:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.468 01:19:07 -- common/autotest_common.sh@10 -- # set +x 00:09:22.468 ************************************ 00:09:22.468 START TEST blockdev_nvme_gpt 00:09:22.468 ************************************ 00:09:22.468 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:22.468 * Looking for test storage... 
00:09:22.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:22.468 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:22.468 01:19:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:22.469 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:22.469 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:22.469 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:22.469 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:22.469 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:22.469 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:22.469 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78556 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:22.728 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78556 00:09:22.728 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 78556 ']' 00:09:22.728 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.728 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:22.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.728 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
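Unlike the bdevperf-driven tests above, the gpt suite runs against a long-lived spdk_tgt: start_spdk_tgt launches the target in the background and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers, which is what the "Waiting for process to start up..." message reflects. A minimal stand-in for that launch-and-poll handshake, using rpc_get_methods purely as a liveness probe (the harness helper also records the PID and handles timeouts more carefully):

    # Start the SPDK target and wait until its RPC server is reachable.
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt &
    tgt_pid=$!

    for i in $(seq 1 100); do
        if $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
    # The target is now ready for configuration RPCs; kill $tgt_pid when done.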
00:09:22.728 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:22.728 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.728 [2024-07-21 01:19:07.903068] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:22.728 [2024-07-21 01:19:07.903202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78556 ] 00:09:22.987 [2024-07-21 01:19:08.074602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.987 [2024-07-21 01:19:08.136248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.555 01:19:08 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:23.555 01:19:08 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:09:23.555 01:19:08 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:23.555 01:19:08 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:09:23.555 01:19:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.121 Waiting for block devices as requested 00:09:24.378 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.378 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.378 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.636 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.905 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:09:29.905 
01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:09:29.905 BYT; 00:09:29.905 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:29.905 01:19:14 blockdev_nvme_gpt 
-- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:09:29.905 BYT; 00:09:29.905 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:29.905 01:19:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:29.905 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:29.906 01:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 
1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:30.841 The operation has completed successfully. 00:09:30.841 01:19:16 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:31.776 The operation has completed successfully. 00:09:31.776 01:19:17 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.281 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.540 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.540 01:19:18 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:33.540 01:19:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.540 01:19:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:33.540 [] 00:09:33.540 01:19:18 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.540 01:19:18 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:33.540 01:19:18 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:33.540 01:19:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:33.540 01:19:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.540 01:19:18 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:33.540 01:19:18 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.540 01:19:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:33.799 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.799 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:33.799 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.799 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
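Everything needed to build the GPT test bdevs is visible in the trace above: setup_gpt_conf picks the first NVMe namespace without a recognised disk label, writes a fresh GPT with two equal partitions, and then rewrites the partition type GUIDs (scraped out of module/bdev/gpt/gpt.h) plus fixed unique GUIDs so the vbdev_gpt module will later expose them as Nvme0n1p1/Nvme0n1p2. Condensed into a sketch, assuming /dev/nvme1n1 may be overwritten and reusing the GUID values from this run:

  dev=/dev/nvme1n1
  SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b         # SPDK_GPT_PART_TYPE_GUID
  SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c     # SPDK_GPT_PART_TYPE_GUID_OLD
  # Fresh GPT label with two partitions covering the whole namespace.
  parted -s "$dev" mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # Stamp SPDK's partition type GUIDs and fixed unique GUIDs on the two partitions.
  sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
  sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"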
00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.129 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.129 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:34.130 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1bf4e587-cc22-4bc6-9fc2-97ddb01a32b0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1bf4e587-cc22-4bc6-9fc2-97ddb01a32b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2de29a7c-ae75-4f8e-9249-04f2d203b634"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2de29a7c-ae75-4f8e-9249-04f2d203b634",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0eae18fd-edc6-4521-81f1-7d5c5d80e55a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0eae18fd-edc6-4521-81f1-7d5c5d80e55a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n3",' ' "aliases": [' ' "733a39e4-4cfb-4e5b-b878-3e0cd38b2059"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "733a39e4-4cfb-4e5b-b878-3e0cd38b2059",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8fd72c43-2a39-480b-bb98-107a2738e604"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8fd72c43-2a39-480b-bb98-107a2738e604",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:34.130 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:34.130 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:34.130 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:34.130 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:34.130 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 78556 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 78556 ']' 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 78556 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78556 00:09:34.130 01:19:19 
blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78556' 00:09:34.130 killing process with pid 78556 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 78556 00:09:34.130 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 78556 00:09:34.698 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:34.698 01:19:19 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:34.698 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:34.698 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.698 01:19:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.958 ************************************ 00:09:34.958 START TEST bdev_hello_world 00:09:34.958 ************************************ 00:09:34.958 01:19:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:34.958 [2024-07-21 01:19:20.102711] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:34.958 [2024-07-21 01:19:20.102867] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79175 ] 00:09:35.216 [2024-07-21 01:19:20.275990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.216 [2024-07-21 01:19:20.342443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.474 [2024-07-21 01:19:20.764785] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:35.475 [2024-07-21 01:19:20.764847] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:35.475 [2024-07-21 01:19:20.764875] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:35.475 [2024-07-21 01:19:20.767352] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:35.475 [2024-07-21 01:19:20.767968] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:35.475 [2024-07-21 01:19:20.767998] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:35.475 [2024-07-21 01:19:20.768246] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
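bdev_hello_world then points the stock hello_bdev example at the first GPT partition, using the same bdev.json the target configuration was generated from; the example opens the bdev, writes a test string, reads it back ("Hello World!" above) and stops the app. Roughly, with the paths from this run:

  spdk=/home/vagrant/spdk_repo/spdk
  # Run the example against the GPT partition bdev described by bdev.json.
  "$spdk"/build/examples/hello_bdev --json "$spdk"/test/bdev/bdev.json -b Nvme0n1p1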
00:09:35.475 00:09:35.475 [2024-07-21 01:19:20.768267] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:36.043 00:09:36.043 real 0m1.094s 00:09:36.043 user 0m0.677s 00:09:36.043 sys 0m0.312s 00:09:36.043 01:19:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.043 01:19:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:36.043 ************************************ 00:09:36.043 END TEST bdev_hello_world 00:09:36.043 ************************************ 00:09:36.043 01:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:36.043 01:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:36.043 01:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.043 01:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.043 ************************************ 00:09:36.043 START TEST bdev_bounds 00:09:36.043 ************************************ 00:09:36.043 01:19:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:09:36.043 01:19:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=79206 00:09:36.043 01:19:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:36.044 Process bdevio pid: 79206 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 79206' 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 79206 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 79206 ']' 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:36.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:36.044 01:19:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:36.044 [2024-07-21 01:19:21.273305] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
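bdev_bounds works differently from the hello-world case: bdevio is started with -w, so it only constructs the bdevs from bdev.json and then waits, and the CUnit suites shown below are kicked off later over RPC by tests.py perform_tests. A rough sketch of that two-step flow, assuming the default RPC socket for the perform_tests call:

  spdk=/home/vagrant/spdk_repo/spdk
  # Start bdevio in wait mode; -s 0 skips pre-reserved memory, as in this run.
  "$spdk"/test/bdev/bdevio/bdevio -w -s 0 --json "$spdk"/test/bdev/bdev.json &
  bdevio_pid=$!
  # ...wait for the RPC socket as in the earlier spdk_tgt sketch...
  # Trigger the I/O test suites over RPC, then tear the app down once they have reported.
  "$spdk"/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"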
00:09:36.044 [2024-07-21 01:19:21.273423] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79206 ] 00:09:36.303 [2024-07-21 01:19:21.446300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:36.303 [2024-07-21 01:19:21.515444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.303 [2024-07-21 01:19:21.515535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.303 [2024-07-21 01:19:21.515646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.871 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:36.871 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:09:36.871 01:19:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:36.871 I/O targets: 00:09:36.871 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:36.871 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:36.871 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:36.871 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:36.871 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:36.871 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:36.871 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:36.871 00:09:36.871 00:09:36.871 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.871 http://cunit.sourceforge.net/ 00:09:36.871 00:09:36.871 00:09:36.871 Suite: bdevio tests on: Nvme3n1 00:09:36.871 Test: blockdev write read block ...passed 00:09:36.871 Test: blockdev write zeroes read block ...passed 00:09:36.871 Test: blockdev write zeroes read no split ...passed 00:09:36.871 Test: blockdev write zeroes read split ...passed 00:09:36.871 Test: blockdev write zeroes read split partial ...passed 00:09:36.872 Test: blockdev reset ...[2024-07-21 01:19:22.173100] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:36.872 passed 00:09:36.872 Test: blockdev write read 8 blocks ...[2024-07-21 01:19:22.175538] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:36.872 passed 00:09:36.872 Test: blockdev write read size > 128k ...passed 00:09:36.872 Test: blockdev write read invalid size ...passed 00:09:36.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:36.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:36.872 Test: blockdev write read max offset ...passed 00:09:36.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:36.872 Test: blockdev writev readv 8 blocks ...passed 00:09:36.872 Test: blockdev writev readv 30 x 1block ...passed 00:09:36.872 Test: blockdev writev readv block ...passed 00:09:36.872 Test: blockdev writev readv size > 128k ...passed 00:09:37.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.132 Test: blockdev comparev and writev ...[2024-07-21 01:19:22.183929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4404000 len:0x1000 00:09:37.132 [2024-07-21 01:19:22.183995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev nvme passthru rw ...passed 00:09:37.132 Test: blockdev nvme passthru vendor specific ...passed 00:09:37.132 Test: blockdev nvme admin passthru ...[2024-07-21 01:19:22.184996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:37.132 [2024-07-21 01:19:22.185029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev copy ...passed 00:09:37.132 Suite: bdevio tests on: Nvme2n3 00:09:37.132 Test: blockdev write read block ...passed 00:09:37.132 Test: blockdev write zeroes read block ...passed 00:09:37.132 Test: blockdev write zeroes read no split ...passed 00:09:37.132 Test: blockdev write zeroes read split ...passed 00:09:37.132 Test: blockdev write zeroes read split partial ...passed 00:09:37.132 Test: blockdev reset ...[2024-07-21 01:19:22.209826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:37.132 passed 00:09:37.132 Test: blockdev write read 8 blocks ...[2024-07-21 01:19:22.212494] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:37.132 passed 00:09:37.132 Test: blockdev write read size > 128k ...passed 00:09:37.132 Test: blockdev write read invalid size ...passed 00:09:37.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:37.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:37.132 Test: blockdev write read max offset ...passed 00:09:37.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:37.132 Test: blockdev writev readv 8 blocks ...passed 00:09:37.132 Test: blockdev writev readv 30 x 1block ...passed 00:09:37.132 Test: blockdev writev readv block ...passed 00:09:37.132 Test: blockdev writev readv size > 128k ...passed 00:09:37.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.132 Test: blockdev comparev and writev ...[2024-07-21 01:19:22.220578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4404000 len:0x1000 00:09:37.132 [2024-07-21 01:19:22.220624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev nvme passthru rw ...passed 00:09:37.132 Test: blockdev nvme passthru vendor specific ...passed 00:09:37.132 Test: blockdev nvme admin passthru ...[2024-07-21 01:19:22.221862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:37.132 [2024-07-21 01:19:22.221896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev copy ...passed 00:09:37.132 Suite: bdevio tests on: Nvme2n2 00:09:37.132 Test: blockdev write read block ...passed 00:09:37.132 Test: blockdev write zeroes read block ...passed 00:09:37.132 Test: blockdev write zeroes read no split ...passed 00:09:37.132 Test: blockdev write zeroes read split ...passed 00:09:37.132 Test: blockdev write zeroes read split partial ...passed 00:09:37.132 Test: blockdev reset ...[2024-07-21 01:19:22.249486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:37.132 passed 00:09:37.132 Test: blockdev write read 8 blocks ...[2024-07-21 01:19:22.251880] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:37.132 passed 00:09:37.132 Test: blockdev write read size > 128k ...passed 00:09:37.132 Test: blockdev write read invalid size ...passed 00:09:37.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:37.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:37.132 Test: blockdev write read max offset ...passed 00:09:37.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:37.132 Test: blockdev writev readv 8 blocks ...passed 00:09:37.132 Test: blockdev writev readv 30 x 1block ...passed 00:09:37.132 Test: blockdev writev readv block ...passed 00:09:37.132 Test: blockdev writev readv size > 128k ...passed 00:09:37.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.132 Test: blockdev comparev and writev ...[2024-07-21 01:19:22.260627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7822000 len:0x1000 00:09:37.132 [2024-07-21 01:19:22.260674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev nvme passthru rw ...passed 00:09:37.132 Test: blockdev nvme passthru vendor specific ...passed 00:09:37.132 Test: blockdev nvme admin passthru ...[2024-07-21 01:19:22.261846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:37.132 [2024-07-21 01:19:22.261878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev copy ...passed 00:09:37.132 Suite: bdevio tests on: Nvme2n1 00:09:37.132 Test: blockdev write read block ...passed 00:09:37.132 Test: blockdev write zeroes read block ...passed 00:09:37.132 Test: blockdev write zeroes read no split ...passed 00:09:37.132 Test: blockdev write zeroes read split ...passed 00:09:37.132 Test: blockdev write zeroes read split partial ...passed 00:09:37.132 Test: blockdev reset ...[2024-07-21 01:19:22.289314] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:37.132 [2024-07-21 01:19:22.291745] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:37.132 passed 00:09:37.132 Test: blockdev write read 8 blocks ...passed 00:09:37.132 Test: blockdev write read size > 128k ...passed 00:09:37.132 Test: blockdev write read invalid size ...passed 00:09:37.132 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:37.132 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:37.132 Test: blockdev write read max offset ...passed 00:09:37.132 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:37.132 Test: blockdev writev readv 8 blocks ...passed 00:09:37.132 Test: blockdev writev readv 30 x 1block ...passed 00:09:37.132 Test: blockdev writev readv block ...passed 00:09:37.132 Test: blockdev writev readv size > 128k ...passed 00:09:37.132 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.132 Test: blockdev comparev and writev ...[2024-07-21 01:19:22.300412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d440d000 len:0x1000 00:09:37.132 [2024-07-21 01:19:22.300455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev nvme passthru rw ...passed 00:09:37.132 Test: blockdev nvme passthru vendor specific ...passed 00:09:37.132 Test: blockdev nvme admin passthru ...[2024-07-21 01:19:22.301570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:37.132 [2024-07-21 01:19:22.301606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:37.132 passed 00:09:37.132 Test: blockdev copy ...passed 00:09:37.132 Suite: bdevio tests on: Nvme1n1 00:09:37.132 Test: blockdev write read block ...passed 00:09:37.132 Test: blockdev write zeroes read block ...passed 00:09:37.132 Test: blockdev write zeroes read no split ...passed 00:09:37.132 Test: blockdev write zeroes read split ...passed 00:09:37.132 Test: blockdev write zeroes read split partial ...passed 00:09:37.133 Test: blockdev reset ...[2024-07-21 01:19:22.328554] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:37.133 passed 00:09:37.133 Test: blockdev write read 8 blocks ...[2024-07-21 01:19:22.330740] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:37.133 passed 00:09:37.133 Test: blockdev write read size > 128k ...passed 00:09:37.133 Test: blockdev write read invalid size ...passed 00:09:37.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:37.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:37.133 Test: blockdev write read max offset ...passed 00:09:37.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:37.133 Test: blockdev writev readv 8 blocks ...passed 00:09:37.133 Test: blockdev writev readv 30 x 1block ...passed 00:09:37.133 Test: blockdev writev readv block ...passed 00:09:37.133 Test: blockdev writev readv size > 128k ...passed 00:09:37.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.133 Test: blockdev comparev and writev ...[2024-07-21 01:19:22.339360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4032000 len:0x1000 00:09:37.133 [2024-07-21 01:19:22.339402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:37.133 passed 00:09:37.133 Test: blockdev nvme passthru rw ...passed 00:09:37.133 Test: blockdev nvme passthru vendor specific ...passed 00:09:37.133 Test: blockdev nvme admin passthru ...[2024-07-21 01:19:22.340601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:37.133 [2024-07-21 01:19:22.340633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:37.133 passed 00:09:37.133 Test: blockdev copy ...passed 00:09:37.133 Suite: bdevio tests on: Nvme0n1p2 00:09:37.133 Test: blockdev write read block ...passed 00:09:37.133 Test: blockdev write zeroes read block ...passed 00:09:37.133 Test: blockdev write zeroes read no split ...passed 00:09:37.133 Test: blockdev write zeroes read split ...passed 00:09:37.133 Test: blockdev write zeroes read split partial ...passed 00:09:37.133 Test: blockdev reset ...[2024-07-21 01:19:22.370109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:37.133 passed 00:09:37.133 Test: blockdev write read 8 blocks ...[2024-07-21 01:19:22.372233] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:37.133 passed 00:09:37.133 Test: blockdev write read size > 128k ...passed 00:09:37.133 Test: blockdev write read invalid size ...passed 00:09:37.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:37.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:37.133 Test: blockdev write read max offset ...passed 00:09:37.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:37.133 Test: blockdev writev readv 8 blocks ...passed 00:09:37.133 Test: blockdev writev readv 30 x 1block ...passed 00:09:37.133 Test: blockdev writev readv block ...passed 00:09:37.133 Test: blockdev writev readv size > 128k ...passed 00:09:37.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.133 Test: blockdev comparev and writev ...passed 00:09:37.133 Test: blockdev nvme passthru rw ...passed 00:09:37.133 Test: blockdev nvme passthru vendor specific ...passed 00:09:37.133 Test: blockdev nvme admin passthru ...passed 00:09:37.133 Test: blockdev copy ...[2024-07-21 01:19:22.379641] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:37.133 separate metadata which is not supported yet. 00:09:37.133 passed 00:09:37.133 Suite: bdevio tests on: Nvme0n1p1 00:09:37.133 Test: blockdev write read block ...passed 00:09:37.133 Test: blockdev write zeroes read block ...passed 00:09:37.133 Test: blockdev write zeroes read no split ...passed 00:09:37.133 Test: blockdev write zeroes read split ...passed 00:09:37.133 Test: blockdev write zeroes read split partial ...passed 00:09:37.133 Test: blockdev reset ...[2024-07-21 01:19:22.399897] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:37.133 passed 00:09:37.133 Test: blockdev write read 8 blocks ...[2024-07-21 01:19:22.402023] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:37.133 passed 00:09:37.133 Test: blockdev write read size > 128k ...passed 00:09:37.133 Test: blockdev write read invalid size ...passed 00:09:37.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:37.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:37.133 Test: blockdev write read max offset ...passed 00:09:37.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:37.133 Test: blockdev writev readv 8 blocks ...passed 00:09:37.133 Test: blockdev writev readv 30 x 1block ...passed 00:09:37.133 Test: blockdev writev readv block ...passed 00:09:37.133 Test: blockdev writev readv size > 128k ...passed 00:09:37.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:37.133 Test: blockdev comparev and writev ...passed 00:09:37.133 Test: blockdev nvme passthru rw ...passed 00:09:37.133 Test: blockdev nvme passthru vendor specific ...passed 00:09:37.133 Test: blockdev nvme admin passthru ...passed 00:09:37.133 Test: blockdev copy ...[2024-07-21 01:19:22.409339] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:37.133 separate metadata which is not supported yet. 
00:09:37.133 passed 00:09:37.133 00:09:37.133 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.133 suites 7 7 n/a 0 0 00:09:37.133 tests 161 161 161 0 0 00:09:37.133 asserts 1006 1006 1006 0 n/a 00:09:37.133 00:09:37.133 Elapsed time = 0.583 seconds 00:09:37.133 0 00:09:37.133 01:19:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 79206 00:09:37.133 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 79206 ']' 00:09:37.133 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 79206 00:09:37.133 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:09:37.133 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:37.393 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79206 00:09:37.393 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:37.393 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:37.393 killing process with pid 79206 00:09:37.393 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79206' 00:09:37.393 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 79206 00:09:37.393 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 79206 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:37.653 00:09:37.653 real 0m1.587s 00:09:37.653 user 0m3.568s 00:09:37.653 sys 0m0.471s 00:09:37.653 ************************************ 00:09:37.653 END TEST bdev_bounds 00:09:37.653 ************************************ 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:37.653 01:19:22 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:37.653 01:19:22 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:37.653 01:19:22 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.653 01:19:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:37.653 ************************************ 00:09:37.653 START TEST bdev_nbd 00:09:37.653 ************************************ 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd 
-- bdev/blockdev.sh@304 -- # local bdev_all 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=79260 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 79260 /var/tmp/spdk-nbd.sock 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 79260 ']' 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:37.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:37.653 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:37.653 [2024-07-21 01:19:22.949031] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
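The nbd test that follows starts a bare bdev_svc app on its own RPC socket (/var/tmp/spdk-nbd.sock) and, for each bdev in the list, exports it through the kernel nbd driver and sanity-checks it with a single direct-I/O read, as the dd lines below show. A minimal sketch of one start/verify/stop cycle, assuming that socket path (the stop call is added for completeness and is not part of this excerpt):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  # Export the bdev; with no explicit device argument the RPC picks and prints a free /dev/nbd*.
  nbd=$(rpc nbd_start_disk Nvme0n1p1)
  # Wait for the kernel to register the device before touching it.
  until grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done
  # One-block O_DIRECT read as a smoke test (mirrors the dd calls in the trace).
  dd if="$nbd" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  # Detach the nbd device again.
  rpc nbd_stop_disk "$nbd"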
00:09:37.653 [2024-07-21 01:19:22.949169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.911 [2024-07-21 01:19:23.118673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.911 [2024-07-21 01:19:23.190182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:38.477 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.736 1+0 records in 00:09:38.736 1+0 records out 00:09:38.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00573524 s, 714 kB/s 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:38.736 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:38.995 1+0 records in 00:09:38.995 1+0 records out 00:09:38.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00232743 s, 1.8 MB/s 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:38.995 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:39.254 01:19:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:39.254 1+0 records in 00:09:39.254 1+0 records out 00:09:39.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00436187 s, 939 kB/s 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:39.254 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:39.513 1+0 records in 00:09:39.513 1+0 records out 00:09:39.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736989 s, 5.6 MB/s 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:39.513 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:39.772 1+0 records in 00:09:39.772 1+0 records out 00:09:39.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776187 s, 5.3 MB/s 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:39.772 01:19:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:40.030 1+0 records in 00:09:40.030 1+0 records out 00:09:40.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793953 s, 5.2 MB/s 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:40.030 1+0 records in 00:09:40.030 1+0 records out 00:09:40.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723969 s, 5.7 MB/s 00:09:40.030 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.031 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:40.031 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd0", 00:09:40.289 "bdev_name": "Nvme0n1p1" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd1", 00:09:40.289 "bdev_name": "Nvme0n1p2" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd2", 00:09:40.289 "bdev_name": "Nvme1n1" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd3", 00:09:40.289 "bdev_name": "Nvme2n1" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd4", 00:09:40.289 "bdev_name": "Nvme2n2" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd5", 00:09:40.289 "bdev_name": "Nvme2n3" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd6", 00:09:40.289 "bdev_name": "Nvme3n1" 00:09:40.289 } 00:09:40.289 ]' 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd0", 00:09:40.289 "bdev_name": "Nvme0n1p1" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd1", 00:09:40.289 "bdev_name": "Nvme0n1p2" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd2", 00:09:40.289 "bdev_name": "Nvme1n1" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd3", 00:09:40.289 "bdev_name": "Nvme2n1" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd4", 00:09:40.289 "bdev_name": "Nvme2n2" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd5", 00:09:40.289 "bdev_name": "Nvme2n3" 00:09:40.289 }, 00:09:40.289 { 00:09:40.289 "nbd_device": "/dev/nbd6", 00:09:40.289 "bdev_name": "Nvme3n1" 00:09:40.289 } 00:09:40.289 ]' 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.289 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.569 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.838 01:19:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.095 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.353 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.612 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:41.871 01:19:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.871 01:19:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.871 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:41.871 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.871 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:41.871 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:41.871 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.871 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:41.871 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:42.130 /dev/nbd0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.130 1+0 records in 00:09:42.130 1+0 records out 00:09:42.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752258 s, 5.4 MB/s 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.130 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:42.389 /dev/nbd1 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.389 1+0 records in 00:09:42.389 1+0 records out 00:09:42.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977401 s, 4.2 MB/s 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.389 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:42.648 /dev/nbd10 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.648 1+0 records in 00:09:42.648 1+0 records out 00:09:42.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674722 s, 6.1 MB/s 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.648 01:19:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:42.906 /dev/nbd11 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:42.906 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.906 1+0 records in 00:09:42.906 1+0 records out 00:09:42.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785016 s, 5.2 MB/s 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:42.907 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:43.165 /dev/nbd12 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:09:43.165 01:19:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.165 1+0 records in 00:09:43.165 1+0 records out 00:09:43.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731818 s, 5.6 MB/s 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:43.165 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:43.424 /dev/nbd13 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.424 1+0 records in 00:09:43.424 1+0 records out 00:09:43.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0037711 s, 1.1 MB/s 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
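The trace above repeats the same readiness check for every NBD device it attaches. Read back from the xtrace, that waitfornbd pattern looks roughly like the sketch below; the helper name, the sleep back-off, and the /tmp scratch path are assumptions for illustration, not the verbatim autotest_common.sh code (the real trace writes to test/bdev/nbdtest in the repo):
waitfornbd_sketch() {
    local nbd_name=$1 i size
    # Poll /proc/partitions until the kernel has registered the nbd device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the trace only shows the grep that succeeds
    done
    # Then confirm the device is actually readable with one direct-I/O block,
    # exactly as the dd/stat/rm sequence in the log does.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}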
00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:43.424 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:43.683 /dev/nbd14 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:43.683 1+0 records in 00:09:43.683 1+0 records out 00:09:43.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000827644 s, 4.9 MB/s 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.683 01:19:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.941 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd0", 00:09:43.941 "bdev_name": "Nvme0n1p1" 00:09:43.941 }, 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd1", 00:09:43.941 "bdev_name": "Nvme0n1p2" 00:09:43.941 }, 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd10", 00:09:43.941 "bdev_name": "Nvme1n1" 00:09:43.941 }, 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd11", 00:09:43.941 "bdev_name": "Nvme2n1" 00:09:43.941 }, 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd12", 00:09:43.941 "bdev_name": "Nvme2n2" 00:09:43.941 }, 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd13", 00:09:43.941 "bdev_name": "Nvme2n3" 00:09:43.941 }, 00:09:43.941 { 
00:09:43.941 "nbd_device": "/dev/nbd14", 00:09:43.941 "bdev_name": "Nvme3n1" 00:09:43.941 } 00:09:43.941 ]' 00:09:43.941 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd0", 00:09:43.941 "bdev_name": "Nvme0n1p1" 00:09:43.941 }, 00:09:43.941 { 00:09:43.941 "nbd_device": "/dev/nbd1", 00:09:43.942 "bdev_name": "Nvme0n1p2" 00:09:43.942 }, 00:09:43.942 { 00:09:43.942 "nbd_device": "/dev/nbd10", 00:09:43.942 "bdev_name": "Nvme1n1" 00:09:43.942 }, 00:09:43.942 { 00:09:43.942 "nbd_device": "/dev/nbd11", 00:09:43.942 "bdev_name": "Nvme2n1" 00:09:43.942 }, 00:09:43.942 { 00:09:43.942 "nbd_device": "/dev/nbd12", 00:09:43.942 "bdev_name": "Nvme2n2" 00:09:43.942 }, 00:09:43.942 { 00:09:43.942 "nbd_device": "/dev/nbd13", 00:09:43.942 "bdev_name": "Nvme2n3" 00:09:43.942 }, 00:09:43.942 { 00:09:43.942 "nbd_device": "/dev/nbd14", 00:09:43.942 "bdev_name": "Nvme3n1" 00:09:43.942 } 00:09:43.942 ]' 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:43.942 /dev/nbd1 00:09:43.942 /dev/nbd10 00:09:43.942 /dev/nbd11 00:09:43.942 /dev/nbd12 00:09:43.942 /dev/nbd13 00:09:43.942 /dev/nbd14' 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:43.942 /dev/nbd1 00:09:43.942 /dev/nbd10 00:09:43.942 /dev/nbd11 00:09:43.942 /dev/nbd12 00:09:43.942 /dev/nbd13 00:09:43.942 /dev/nbd14' 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:43.942 256+0 records in 00:09:43.942 256+0 records out 00:09:43.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111621 s, 93.9 MB/s 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:43.942 256+0 records in 00:09:43.942 256+0 records out 00:09:43.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140287 s, 7.5 MB/s 00:09:43.942 
01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.942 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:44.201 256+0 records in 00:09:44.201 256+0 records out 00:09:44.201 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147897 s, 7.1 MB/s 00:09:44.201 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.201 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:44.459 256+0 records in 00:09:44.459 256+0 records out 00:09:44.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14028 s, 7.5 MB/s 00:09:44.459 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.459 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:44.459 256+0 records in 00:09:44.459 256+0 records out 00:09:44.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146237 s, 7.2 MB/s 00:09:44.459 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.459 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:44.718 256+0 records in 00:09:44.718 256+0 records out 00:09:44.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139629 s, 7.5 MB/s 00:09:44.718 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.718 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:44.718 256+0 records in 00:09:44.718 256+0 records out 00:09:44.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139526 s, 7.5 MB/s 00:09:44.718 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.718 01:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:44.977 256+0 records in 00:09:44.977 256+0 records out 00:09:44.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13923 s, 7.5 MB/s 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.977 01:19:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:44.977 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.978 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.237 
01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.237 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.496 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.769 01:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:46.028 01:19:31 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.028 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.287 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:46.547 
01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:46.547 01:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:46.805 malloc_lvol_verify 00:09:46.805 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:47.063 d81dad68-479b-48ad-8102-2a34abd0de30 00:09:47.063 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:47.322 9d424a4f-02c6-4503-bb97-05c489af7e6d 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:47.322 /dev/nbd0 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:47.322 mke2fs 1.46.5 (30-Dec-2021) 00:09:47.322 Discarding device blocks: 0/4096 done 00:09:47.322 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:47.322 00:09:47.322 Allocating group tables: 0/1 done 00:09:47.322 Writing inode tables: 0/1 done 00:09:47.322 Creating journal (1024 blocks): done 00:09:47.322 Writing superblocks and filesystem accounting information: 0/1 done 00:09:47.322 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:47.322 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.322 
01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 79260 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 79260 ']' 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 79260 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79260 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:47.582 killing process with pid 79260 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79260' 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 79260 00:09:47.582 01:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 79260 00:09:48.148 01:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:48.148 00:09:48.148 real 0m10.363s 00:09:48.148 user 0m13.373s 00:09:48.148 sys 0m4.799s 00:09:48.148 ************************************ 00:09:48.148 END TEST bdev_nbd 00:09:48.148 ************************************ 00:09:48.148 01:19:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:48.148 01:19:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:48.148 01:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:48.148 01:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:09:48.148 01:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:09:48.148 skipping fio tests on NVMe due to multi-ns failures. 00:09:48.148 01:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
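Note on the teardown sequence traced above: each /dev/nbdN export is stopped through the SPDK nbd RPC socket, and the helper then polls /proc/partitions (at most 20 times) until the kernel no longer lists the device before moving on. A minimal sketch of that pattern, reconstructed only from the xtrace output here rather than from the nbd_common.sh source; the 0.1 s poll interval is an assumption, since in this run the devices disappear before any sleep is reached:

  rpc_server=/var/tmp/spdk-nbd.sock
  for dev in "${nbd_list[@]}"; do                        # e.g. /dev/nbd0 /dev/nbd1 ... /dev/nbd14
      scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$dev"
      name=$(basename "$dev")                            # nbd0, nbd1, ...
      for ((i = 1; i <= 20; i++)); do                    # bounded wait for the kernel to drop it
          grep -q -w "$name" /proc/partitions || break   # no longer in the partition table: done
          sleep 0.1                                      # assumed poll interval between checks
      done
  done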
00:09:48.148 01:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:48.148 01:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:48.148 01:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:48.148 01:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:48.148 01:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.148 ************************************ 00:09:48.148 START TEST bdev_verify 00:09:48.148 ************************************ 00:09:48.148 01:19:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:48.148 [2024-07-21 01:19:33.373942] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:48.148 [2024-07-21 01:19:33.374073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79661 ] 00:09:48.443 [2024-07-21 01:19:33.542334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.443 [2024-07-21 01:19:33.619266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.443 [2024-07-21 01:19:33.619311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.008 Running I/O for 5 seconds... 00:09:54.272 00:09:54.272 Latency(us) 00:09:54.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.272 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x0 length 0x5e800 00:09:54.272 Nvme0n1p1 : 5.10 1280.38 5.00 0.00 0.00 99791.26 20529.35 93066.38 00:09:54.272 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x5e800 length 0x5e800 00:09:54.272 Nvme0n1p1 : 5.07 1136.46 4.44 0.00 0.00 112198.94 27793.58 96856.42 00:09:54.272 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x0 length 0x5e7ff 00:09:54.272 Nvme0n1p2 : 5.10 1279.61 5.00 0.00 0.00 99692.37 21371.58 90960.81 00:09:54.272 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:54.272 Nvme0n1p2 : 5.07 1135.96 4.44 0.00 0.00 112020.48 25372.17 85486.32 00:09:54.272 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x0 length 0xa0000 00:09:54.272 Nvme1n1 : 5.10 1279.25 5.00 0.00 0.00 99373.41 21897.97 88434.12 00:09:54.272 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0xa0000 length 0xa0000 00:09:54.272 Nvme1n1 : 5.07 1135.63 4.44 0.00 0.00 111757.67 24529.94 84644.09 00:09:54.272 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x0 length 0x80000 00:09:54.272 Nvme2n1 : 5.10 1278.96 5.00 0.00 0.00 99206.88 19792.40 85907.43 00:09:54.272 Job: Nvme2n1 (Core Mask 0x2, 
workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x80000 length 0x80000 00:09:54.272 Nvme2n1 : 5.07 1135.29 4.43 0.00 0.00 111522.24 21476.86 85907.43 00:09:54.272 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x0 length 0x80000 00:09:54.272 Nvme2n2 : 5.11 1278.60 4.99 0.00 0.00 99077.69 19792.40 85907.43 00:09:54.272 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x80000 length 0x80000 00:09:54.272 Nvme2n2 : 5.10 1142.39 4.46 0.00 0.00 110620.06 6790.48 85065.20 00:09:54.272 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x0 length 0x80000 00:09:54.272 Nvme2n3 : 5.11 1278.27 4.99 0.00 0.00 98956.86 17792.10 90539.69 00:09:54.272 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x80000 length 0x80000 00:09:54.272 Nvme2n3 : 5.11 1152.60 4.50 0.00 0.00 109632.58 5948.25 85907.43 00:09:54.272 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x0 length 0x20000 00:09:54.272 Nvme3n1 : 5.11 1277.98 4.99 0.00 0.00 98871.05 16318.20 93487.50 00:09:54.272 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.272 Verification LBA range: start 0x20000 length 0x20000 00:09:54.272 Nvme3n1 : 5.11 1152.16 4.50 0.00 0.00 109525.19 6474.64 89697.47 00:09:54.272 =================================================================================================================== 00:09:54.272 Total : 16943.55 66.19 0.00 0.00 104811.54 5948.25 96856.42 00:09:54.533 00:09:54.533 real 0m6.501s 00:09:54.533 user 0m11.932s 00:09:54.533 sys 0m0.387s 00:09:54.533 01:19:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:54.533 01:19:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:54.533 ************************************ 00:09:54.533 END TEST bdev_verify 00:09:54.533 ************************************ 00:09:54.793 01:19:39 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:54.793 01:19:39 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:54.793 01:19:39 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:54.793 01:19:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:54.793 ************************************ 00:09:54.793 START TEST bdev_verify_big_io 00:09:54.793 ************************************ 00:09:54.793 01:19:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:54.793 [2024-07-21 01:19:39.956348] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:09:54.793 [2024-07-21 01:19:39.957325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79752 ] 00:09:55.053 [2024-07-21 01:19:40.149354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.053 [2024-07-21 01:19:40.217489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.053 [2024-07-21 01:19:40.217592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.621 Running I/O for 5 seconds... 00:10:02.178 00:10:02.178 Latency(us) 00:10:02.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.178 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x0 length 0x5e80 00:10:02.178 Nvme0n1p1 : 5.39 201.80 12.61 0.00 0.00 619908.36 20002.96 710841.88 00:10:02.178 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x5e80 length 0x5e80 00:10:02.178 Nvme0n1p1 : 5.62 92.10 5.76 0.00 0.00 1339102.37 19897.68 1468848.63 00:10:02.178 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x0 length 0x5e7f 00:10:02.178 Nvme0n1p2 : 5.45 205.12 12.82 0.00 0.00 598798.96 55166.05 609774.32 00:10:02.178 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:02.178 Nvme0n1p2 : 5.66 96.24 6.02 0.00 0.00 1220309.34 36005.32 1334091.87 00:10:02.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x0 length 0xa000 00:10:02.178 Nvme1n1 : 5.45 211.24 13.20 0.00 0.00 577805.41 56850.51 619881.07 00:10:02.178 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0xa000 length 0xa000 00:10:02.178 Nvme1n1 : 5.70 106.44 6.65 0.00 0.00 1067143.73 36847.55 1003937.82 00:10:02.178 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x0 length 0x8000 00:10:02.178 Nvme2n1 : 5.46 210.96 13.18 0.00 0.00 569024.23 56008.28 616512.15 00:10:02.178 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x8000 length 0x8000 00:10:02.178 Nvme2n1 : 5.80 118.19 7.39 0.00 0.00 930818.46 26319.68 1590129.71 00:10:02.178 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x0 length 0x8000 00:10:02.178 Nvme2n2 : 5.51 213.71 13.36 0.00 0.00 552282.40 52639.36 592929.72 00:10:02.178 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x8000 length 0x8000 00:10:02.178 Nvme2n2 : 5.96 141.72 8.86 0.00 0.00 748370.44 23056.04 2385194.56 00:10:02.178 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x0 length 0x8000 00:10:02.178 Nvme2n3 : 5.54 225.68 14.10 0.00 0.00 518540.82 5237.62 603036.48 00:10:02.178 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x8000 length 0x8000 00:10:02.178 Nvme2n3 : 6.17 195.94 12.25 0.00 0.00 525617.91 9738.28 2425621.59 
00:10:02.178 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x0 length 0x2000 00:10:02.178 Nvme3n1 : 5.55 230.83 14.43 0.00 0.00 499594.71 5027.06 613143.24 00:10:02.178 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.178 Verification LBA range: start 0x2000 length 0x2000 00:10:02.178 Nvme3n1 : 6.27 257.99 16.12 0.00 0.00 388934.07 532.97 2439097.27 00:10:02.178 =================================================================================================================== 00:10:02.178 Total : 2507.95 156.75 0.00 0.00 641990.16 532.97 2439097.27 00:10:02.747 00:10:02.747 real 0m7.948s 00:10:02.747 user 0m14.750s 00:10:02.747 sys 0m0.426s 00:10:02.747 ************************************ 00:10:02.747 END TEST bdev_verify_big_io 00:10:02.747 ************************************ 00:10:02.747 01:19:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:02.747 01:19:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:02.747 01:19:47 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:02.747 01:19:47 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:02.747 01:19:47 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:02.747 01:19:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:02.747 ************************************ 00:10:02.747 START TEST bdev_write_zeroes 00:10:02.747 ************************************ 00:10:02.747 01:19:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:02.747 [2024-07-21 01:19:47.989792] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:02.747 [2024-07-21 01:19:47.989947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79858 ] 00:10:03.006 [2024-07-21 01:19:48.162384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.006 [2024-07-21 01:19:48.233314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.601 Running I/O for 1 seconds... 
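The bdevperf invocations in this part of the log all follow the same shape: a JSON bdev config plus a workload description. The form below annotates the write_zeroes run started above; the per-flag comments are an interpretation of common bdevperf options rather than something stated in the log, and the verify runs earlier additionally pass -C and -m 0x3 (the two-core mask, consistent with the "Reactor started on core 0/1" lines), where the exact meaning of -C is not shown here:

  args=(
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config: GPT partitions plus NVMe namespaces
      -q 128            # outstanding I/Os per job (queue depth)
      -o 4096           # I/O size in bytes
      -w write_zeroes   # workload type; the earlier runs use -w verify
      -t 1              # run time in seconds
      ''                # empty positional argument, passed as-is by the harness
  )
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"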
00:10:04.532 00:10:04.532 Latency(us) 00:10:04.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.532 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:04.532 Nvme0n1p1 : 1.01 10284.57 40.17 0.00 0.00 12411.91 8948.69 28425.25 00:10:04.532 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:04.532 Nvme0n1p2 : 1.02 10273.43 40.13 0.00 0.00 12407.21 9001.33 28004.14 00:10:04.532 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:04.532 Nvme1n1 : 1.02 10263.85 40.09 0.00 0.00 12384.24 9264.53 26530.24 00:10:04.532 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:04.532 Nvme2n1 : 1.02 10254.57 40.06 0.00 0.00 12377.69 9317.17 26003.84 00:10:04.532 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:04.532 Nvme2n2 : 1.02 10245.63 40.02 0.00 0.00 12368.26 9264.53 25582.73 00:10:04.532 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:04.532 Nvme2n3 : 1.02 10323.51 40.33 0.00 0.00 12181.16 3395.24 17055.15 00:10:04.532 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:04.532 Nvme3n1 : 1.02 10314.62 40.29 0.00 0.00 12170.66 3579.48 16212.92 00:10:04.532 =================================================================================================================== 00:10:04.532 Total : 71960.17 281.09 0.00 0.00 12328.20 3395.24 28425.25 00:10:04.790 00:10:04.790 real 0m2.178s 00:10:04.790 user 0m1.733s 00:10:04.790 sys 0m0.332s 00:10:04.790 01:19:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:04.790 ************************************ 00:10:04.790 END TEST bdev_write_zeroes 00:10:04.790 ************************************ 00:10:04.790 01:19:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:05.048 01:19:50 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:05.048 01:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:05.048 01:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:05.049 01:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:05.049 ************************************ 00:10:05.049 START TEST bdev_json_nonenclosed 00:10:05.049 ************************************ 00:10:05.049 01:19:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:05.049 [2024-07-21 01:19:50.245192] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:05.049 [2024-07-21 01:19:50.245324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79900 ] 00:10:05.306 [2024-07-21 01:19:50.417684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.306 [2024-07-21 01:19:50.488375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.306 [2024-07-21 01:19:50.488491] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:05.306 [2024-07-21 01:19:50.488529] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:05.306 [2024-07-21 01:19:50.488549] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:05.565 00:10:05.565 real 0m0.482s 00:10:05.565 user 0m0.190s 00:10:05.565 sys 0m0.188s 00:10:05.565 01:19:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:05.565 01:19:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:05.565 ************************************ 00:10:05.565 END TEST bdev_json_nonenclosed 00:10:05.565 ************************************ 00:10:05.565 01:19:50 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:05.565 01:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:10:05.565 01:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:05.565 01:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:05.565 ************************************ 00:10:05.565 START TEST bdev_json_nonarray 00:10:05.565 ************************************ 00:10:05.565 01:19:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:05.565 [2024-07-21 01:19:50.800759] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:05.566 [2024-07-21 01:19:50.800899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79930 ] 00:10:05.824 [2024-07-21 01:19:50.974047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.824 [2024-07-21 01:19:51.049943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.824 [2024-07-21 01:19:51.050052] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:05.824 [2024-07-21 01:19:51.050079] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:05.824 [2024-07-21 01:19:51.050100] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:06.083 00:10:06.083 real 0m0.483s 00:10:06.083 user 0m0.201s 00:10:06.083 sys 0m0.177s 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:06.083 ************************************ 00:10:06.083 END TEST bdev_json_nonarray 00:10:06.083 ************************************ 00:10:06.083 01:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:10:06.083 01:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:10:06.083 01:19:51 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:06.083 01:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:06.083 01:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:06.083 01:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:06.083 ************************************ 00:10:06.083 START TEST bdev_gpt_uuid 00:10:06.083 ************************************ 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79951 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79951 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 79951 ']' 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:06.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:06.083 01:19:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:06.083 [2024-07-21 01:19:51.377788] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:06.083 [2024-07-21 01:19:51.377915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79951 ] 00:10:06.341 [2024-07-21 01:19:51.543920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.341 [2024-07-21 01:19:51.610528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.908 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:06.908 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:10:06.908 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:06.908 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.908 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:07.167 Some configs were skipped because the RPC state that can call them passed over. 00:10:07.167 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.167 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:10:07.167 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.167 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:07.424 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.424 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:07.424 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.424 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:07.424 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.424 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:10:07.424 { 00:10:07.424 "name": "Nvme0n1p1", 00:10:07.424 "aliases": [ 00:10:07.424 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:07.424 ], 00:10:07.424 "product_name": "GPT Disk", 00:10:07.424 "block_size": 4096, 00:10:07.424 "num_blocks": 774144, 00:10:07.424 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:07.424 "md_size": 64, 00:10:07.424 "md_interleave": false, 00:10:07.424 "dif_type": 0, 00:10:07.424 "assigned_rate_limits": { 00:10:07.424 "rw_ios_per_sec": 0, 00:10:07.424 "rw_mbytes_per_sec": 0, 00:10:07.424 "r_mbytes_per_sec": 0, 00:10:07.424 "w_mbytes_per_sec": 0 00:10:07.425 }, 00:10:07.425 "claimed": false, 00:10:07.425 "zoned": false, 00:10:07.425 "supported_io_types": { 00:10:07.425 "read": true, 00:10:07.425 "write": true, 00:10:07.425 "unmap": true, 00:10:07.425 "write_zeroes": true, 00:10:07.425 "flush": true, 00:10:07.425 "reset": true, 00:10:07.425 "compare": true, 00:10:07.425 "compare_and_write": false, 00:10:07.425 "abort": true, 00:10:07.425 "nvme_admin": false, 00:10:07.425 "nvme_io": false 00:10:07.425 }, 00:10:07.425 "driver_specific": { 00:10:07.425 "gpt": { 00:10:07.425 "base_bdev": "Nvme0n1", 00:10:07.425 "offset_blocks": 256, 00:10:07.425 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:07.425 "unique_partition_guid": 
"6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:07.425 "partition_name": "SPDK_TEST_first" 00:10:07.425 } 00:10:07.425 } 00:10:07.425 } 00:10:07.425 ]' 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:10:07.425 { 00:10:07.425 "name": "Nvme0n1p2", 00:10:07.425 "aliases": [ 00:10:07.425 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:07.425 ], 00:10:07.425 "product_name": "GPT Disk", 00:10:07.425 "block_size": 4096, 00:10:07.425 "num_blocks": 774143, 00:10:07.425 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:07.425 "md_size": 64, 00:10:07.425 "md_interleave": false, 00:10:07.425 "dif_type": 0, 00:10:07.425 "assigned_rate_limits": { 00:10:07.425 "rw_ios_per_sec": 0, 00:10:07.425 "rw_mbytes_per_sec": 0, 00:10:07.425 "r_mbytes_per_sec": 0, 00:10:07.425 "w_mbytes_per_sec": 0 00:10:07.425 }, 00:10:07.425 "claimed": false, 00:10:07.425 "zoned": false, 00:10:07.425 "supported_io_types": { 00:10:07.425 "read": true, 00:10:07.425 "write": true, 00:10:07.425 "unmap": true, 00:10:07.425 "write_zeroes": true, 00:10:07.425 "flush": true, 00:10:07.425 "reset": true, 00:10:07.425 "compare": true, 00:10:07.425 "compare_and_write": false, 00:10:07.425 "abort": true, 00:10:07.425 "nvme_admin": false, 00:10:07.425 "nvme_io": false 00:10:07.425 }, 00:10:07.425 "driver_specific": { 00:10:07.425 "gpt": { 00:10:07.425 "base_bdev": "Nvme0n1", 00:10:07.425 "offset_blocks": 774400, 00:10:07.425 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:07.425 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:07.425 "partition_name": "SPDK_TEST_second" 00:10:07.425 } 00:10:07.425 } 00:10:07.425 } 00:10:07.425 ]' 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:10:07.425 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- 
bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 79951 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 79951 ']' 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 79951 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79951 00:10:07.683 killing process with pid 79951 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79951' 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 79951 00:10:07.683 01:19:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 79951 00:10:08.248 ************************************ 00:10:08.248 END TEST bdev_gpt_uuid 00:10:08.248 ************************************ 00:10:08.248 00:10:08.248 real 0m2.141s 00:10:08.248 user 0m2.100s 00:10:08.248 sys 0m0.614s 00:10:08.248 01:19:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:08.248 01:19:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:08.248 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:08.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.073 Waiting for block devices as requested 00:10:09.332 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.332 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.332 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.591 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.865 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:14.865 01:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:10:14.865 01:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- 
# wipefs --all /dev/nvme1n1 00:10:14.865 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:14.865 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:14.865 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:14.865 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:10:14.865 01:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:14.865 ************************************ 00:10:14.865 END TEST blockdev_nvme_gpt 00:10:14.865 ************************************ 00:10:14.865 00:10:14.865 real 0m52.457s 00:10:14.865 user 1m2.257s 00:10:14.865 sys 0m12.241s 00:10:14.865 01:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:14.865 01:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:15.123 01:20:00 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:15.123 01:20:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:15.123 01:20:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:15.123 01:20:00 -- common/autotest_common.sh@10 -- # set +x 00:10:15.123 ************************************ 00:10:15.123 START TEST nvme 00:10:15.123 ************************************ 00:10:15.123 01:20:00 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:15.123 * Looking for test storage... 00:10:15.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:15.123 01:20:00 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:16.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:16.738 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.738 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.738 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.738 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:16.738 01:20:02 nvme -- nvme/nvme.sh@79 -- # uname 00:10:16.738 01:20:02 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:16.738 01:20:02 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:16.738 01:20:02 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1067 -- # stubpid=80583 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:16.738 Waiting for stub to ready for secondary processes... 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80583 ]] 00:10:16.738 01:20:02 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:10:16.996 [2024-07-21 01:20:02.090737] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:16.996 [2024-07-21 01:20:02.090877] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:17.927 01:20:03 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:17.927 01:20:03 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80583 ]] 00:10:17.927 01:20:03 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:10:17.927 [2024-07-21 01:20:03.077565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.927 [2024-07-21 01:20:03.112299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.927 [2024-07-21 01:20:03.112417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.927 [2024-07-21 01:20:03.112553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.927 [2024-07-21 01:20:03.127414] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:17.927 [2024-07-21 01:20:03.127480] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.927 [2024-07-21 01:20:03.142343] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:17.927 [2024-07-21 01:20:03.142541] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:17.927 [2024-07-21 01:20:03.143235] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.927 [2024-07-21 01:20:03.143441] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:17.927 [2024-07-21 01:20:03.143514] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:17.927 [2024-07-21 01:20:03.144173] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.927 [2024-07-21 01:20:03.144382] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:17.927 [2024-07-21 01:20:03.144453] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:17.927 [2024-07-21 01:20:03.145267] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:17.927 [2024-07-21 01:20:03.145439] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:17.927 [2024-07-21 01:20:03.145499] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:17.927 [2024-07-21 01:20:03.145580] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:17.927 [2024-07-21 01:20:03.145640] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:18.858 01:20:04 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:18.858 done. 00:10:18.858 01:20:04 nvme -- common/autotest_common.sh@1074 -- # echo done. 
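Before any of the nvme.sh sub-tests run, the harness starts a primary "stub" process and blocks until it is ready, which is what the repeated autotest_common.sh@1069, @1071 and @1072 lines above show: test for /var/run/spdk_stub0, confirm /proc/$stubpid still exists, sleep 1 s, repeat. A minimal sketch of that wait loop, reconstructed from the trace; the error handling when the stub dies is an assumption, here it simply stops waiting:

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!                                   # 80583 in this run
  echo 'Waiting for stub to ready for secondary processes...'
  while [ ! -e /var/run/spdk_stub0 ]; do       # readiness marker checked by the harness
      [[ -e /proc/$stubpid ]] || break         # stop waiting if the stub process exited
      sleep 1s
  done
  echo done.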
00:10:18.859 01:20:04 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:18.859 01:20:04 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:10:18.859 01:20:04 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:18.859 01:20:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.859 ************************************ 00:10:18.859 START TEST nvme_reset 00:10:18.859 ************************************ 00:10:18.859 01:20:04 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:19.117 Initializing NVMe Controllers 00:10:19.117 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:19.117 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:19.117 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:19.117 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:19.117 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:19.117 00:10:19.117 real 0m0.268s 00:10:19.117 user 0m0.081s 00:10:19.117 sys 0m0.141s 00:10:19.117 01:20:04 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.117 01:20:04 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:19.117 ************************************ 00:10:19.117 END TEST nvme_reset 00:10:19.117 ************************************ 00:10:19.117 01:20:04 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:19.117 01:20:04 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:19.117 01:20:04 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.117 01:20:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.117 ************************************ 00:10:19.117 START TEST nvme_identify 00:10:19.117 ************************************ 00:10:19.117 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:10:19.117 01:20:04 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:19.117 01:20:04 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:19.117 01:20:04 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:19.117 01:20:04 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:19.117 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:19.117 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:10:19.117 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:19.117 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:19.117 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:19.375 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:19.375 01:20:04 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:19.375 01:20:04 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:19.635 [2024-07-21 01:20:04.749908] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80616 terminated unexpected 00:10:19.635 ===================================================== 00:10:19.635 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:19.635 
===================================================== 00:10:19.635 Controller Capabilities/Features 00:10:19.635 ================================ 00:10:19.635 Vendor ID: 1b36 00:10:19.635 Subsystem Vendor ID: 1af4 00:10:19.635 Serial Number: 12340 00:10:19.635 Model Number: QEMU NVMe Ctrl 00:10:19.635 Firmware Version: 8.0.0 00:10:19.635 Recommended Arb Burst: 6 00:10:19.635 IEEE OUI Identifier: 00 54 52 00:10:19.635 Multi-path I/O 00:10:19.635 May have multiple subsystem ports: No 00:10:19.635 May have multiple controllers: No 00:10:19.635 Associated with SR-IOV VF: No 00:10:19.635 Max Data Transfer Size: 524288 00:10:19.635 Max Number of Namespaces: 256 00:10:19.635 Max Number of I/O Queues: 64 00:10:19.635 NVMe Specification Version (VS): 1.4 00:10:19.635 NVMe Specification Version (Identify): 1.4 00:10:19.635 Maximum Queue Entries: 2048 00:10:19.635 Contiguous Queues Required: Yes 00:10:19.635 Arbitration Mechanisms Supported 00:10:19.635 Weighted Round Robin: Not Supported 00:10:19.635 Vendor Specific: Not Supported 00:10:19.635 Reset Timeout: 7500 ms 00:10:19.635 Doorbell Stride: 4 bytes 00:10:19.635 NVM Subsystem Reset: Not Supported 00:10:19.635 Command Sets Supported 00:10:19.635 NVM Command Set: Supported 00:10:19.635 Boot Partition: Not Supported 00:10:19.635 Memory Page Size Minimum: 4096 bytes 00:10:19.635 Memory Page Size Maximum: 65536 bytes 00:10:19.635 Persistent Memory Region: Not Supported 00:10:19.635 Optional Asynchronous Events Supported 00:10:19.635 Namespace Attribute Notices: Supported 00:10:19.635 Firmware Activation Notices: Not Supported 00:10:19.635 ANA Change Notices: Not Supported 00:10:19.635 PLE Aggregate Log Change Notices: Not Supported 00:10:19.635 LBA Status Info Alert Notices: Not Supported 00:10:19.635 EGE Aggregate Log Change Notices: Not Supported 00:10:19.635 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.635 Zone Descriptor Change Notices: Not Supported 00:10:19.635 Discovery Log Change Notices: Not Supported 00:10:19.635 Controller Attributes 00:10:19.635 128-bit Host Identifier: Not Supported 00:10:19.635 Non-Operational Permissive Mode: Not Supported 00:10:19.635 NVM Sets: Not Supported 00:10:19.636 Read Recovery Levels: Not Supported 00:10:19.636 Endurance Groups: Not Supported 00:10:19.636 Predictable Latency Mode: Not Supported 00:10:19.636 Traffic Based Keep ALive: Not Supported 00:10:19.636 Namespace Granularity: Not Supported 00:10:19.636 SQ Associations: Not Supported 00:10:19.636 UUID List: Not Supported 00:10:19.636 Multi-Domain Subsystem: Not Supported 00:10:19.636 Fixed Capacity Management: Not Supported 00:10:19.636 Variable Capacity Management: Not Supported 00:10:19.636 Delete Endurance Group: Not Supported 00:10:19.636 Delete NVM Set: Not Supported 00:10:19.636 Extended LBA Formats Supported: Supported 00:10:19.636 Flexible Data Placement Supported: Not Supported 00:10:19.636 00:10:19.636 Controller Memory Buffer Support 00:10:19.636 ================================ 00:10:19.636 Supported: No 00:10:19.636 00:10:19.636 Persistent Memory Region Support 00:10:19.636 ================================ 00:10:19.636 Supported: No 00:10:19.636 00:10:19.636 Admin Command Set Attributes 00:10:19.636 ============================ 00:10:19.636 Security Send/Receive: Not Supported 00:10:19.636 Format NVM: Supported 00:10:19.636 Firmware Activate/Download: Not Supported 00:10:19.636 Namespace Management: Supported 00:10:19.636 Device Self-Test: Not Supported 00:10:19.636 Directives: Supported 00:10:19.636 NVMe-MI: Not Supported 
00:10:19.636 Virtualization Management: Not Supported 00:10:19.636 Doorbell Buffer Config: Supported 00:10:19.636 Get LBA Status Capability: Not Supported 00:10:19.636 Command & Feature Lockdown Capability: Not Supported 00:10:19.636 Abort Command Limit: 4 00:10:19.636 Async Event Request Limit: 4 00:10:19.636 Number of Firmware Slots: N/A 00:10:19.636 Firmware Slot 1 Read-Only: N/A 00:10:19.636 Firmware Activation Without Reset: N/A 00:10:19.636 Multiple Update Detection Support: N/A 00:10:19.636 Firmware Update Granularity: No Information Provided 00:10:19.636 Per-Namespace SMART Log: Yes 00:10:19.636 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.636 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:19.636 Command Effects Log Page: Supported 00:10:19.636 Get Log Page Extended Data: Supported 00:10:19.636 Telemetry Log Pages: Not Supported 00:10:19.636 Persistent Event Log Pages: Not Supported 00:10:19.636 Supported Log Pages Log Page: May Support 00:10:19.636 Commands Supported & Effects Log Page: Not Supported 00:10:19.636 Feature Identifiers & Effects Log Page:May Support 00:10:19.636 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.636 Data Area 4 for Telemetry Log: Not Supported 00:10:19.636 Error Log Page Entries Supported: 1 00:10:19.636 Keep Alive: Not Supported 00:10:19.636 00:10:19.636 NVM Command Set Attributes 00:10:19.636 ========================== 00:10:19.636 Submission Queue Entry Size 00:10:19.636 Max: 64 00:10:19.636 Min: 64 00:10:19.636 Completion Queue Entry Size 00:10:19.636 Max: 16 00:10:19.636 Min: 16 00:10:19.636 Number of Namespaces: 256 00:10:19.636 Compare Command: Supported 00:10:19.636 Write Uncorrectable Command: Not Supported 00:10:19.636 Dataset Management Command: Supported 00:10:19.636 Write Zeroes Command: Supported 00:10:19.636 Set Features Save Field: Supported 00:10:19.636 Reservations: Not Supported 00:10:19.636 Timestamp: Supported 00:10:19.636 Copy: Supported 00:10:19.636 Volatile Write Cache: Present 00:10:19.636 Atomic Write Unit (Normal): 1 00:10:19.636 Atomic Write Unit (PFail): 1 00:10:19.636 Atomic Compare & Write Unit: 1 00:10:19.636 Fused Compare & Write: Not Supported 00:10:19.636 Scatter-Gather List 00:10:19.636 SGL Command Set: Supported 00:10:19.636 SGL Keyed: Not Supported 00:10:19.636 SGL Bit Bucket Descriptor: Not Supported 00:10:19.636 SGL Metadata Pointer: Not Supported 00:10:19.636 Oversized SGL: Not Supported 00:10:19.636 SGL Metadata Address: Not Supported 00:10:19.636 SGL Offset: Not Supported 00:10:19.636 Transport SGL Data Block: Not Supported 00:10:19.636 Replay Protected Memory Block: Not Supported 00:10:19.636 00:10:19.636 Firmware Slot Information 00:10:19.636 ========================= 00:10:19.636 Active slot: 1 00:10:19.636 Slot 1 Firmware Revision: 1.0 00:10:19.636 00:10:19.636 00:10:19.636 Commands Supported and Effects 00:10:19.636 ============================== 00:10:19.636 Admin Commands 00:10:19.636 -------------- 00:10:19.636 Delete I/O Submission Queue (00h): Supported 00:10:19.636 Create I/O Submission Queue (01h): Supported 00:10:19.636 Get Log Page (02h): Supported 00:10:19.636 Delete I/O Completion Queue (04h): Supported 00:10:19.636 Create I/O Completion Queue (05h): Supported 00:10:19.636 Identify (06h): Supported 00:10:19.636 Abort (08h): Supported 00:10:19.636 Set Features (09h): Supported 00:10:19.636 Get Features (0Ah): Supported 00:10:19.636 Asynchronous Event Request (0Ch): Supported 00:10:19.636 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.636 Directive 
Send (19h): Supported 00:10:19.636 Directive Receive (1Ah): Supported 00:10:19.636 Virtualization Management (1Ch): Supported 00:10:19.636 Doorbell Buffer Config (7Ch): Supported 00:10:19.636 Format NVM (80h): Supported LBA-Change 00:10:19.636 I/O Commands 00:10:19.636 ------------ 00:10:19.636 Flush (00h): Supported LBA-Change 00:10:19.636 Write (01h): Supported LBA-Change 00:10:19.636 Read (02h): Supported 00:10:19.636 Compare (05h): Supported 00:10:19.636 Write Zeroes (08h): Supported LBA-Change 00:10:19.636 Dataset Management (09h): Supported LBA-Change 00:10:19.636 Unknown (0Ch): Supported 00:10:19.636 Unknown (12h): Supported 00:10:19.636 Copy (19h): Supported LBA-Change 00:10:19.636 Unknown (1Dh): Supported LBA-Change 00:10:19.636 00:10:19.636 Error Log 00:10:19.636 ========= 00:10:19.636 00:10:19.636 Arbitration 00:10:19.636 =========== 00:10:19.636 Arbitration Burst: no limit 00:10:19.636 00:10:19.636 Power Management 00:10:19.636 ================ 00:10:19.636 Number of Power States: 1 00:10:19.636 Current Power State: Power State #0 00:10:19.636 Power State #0: 00:10:19.636 Max Power: 25.00 W 00:10:19.636 Non-Operational State: Operational 00:10:19.636 Entry Latency: 16 microseconds 00:10:19.636 Exit Latency: 4 microseconds 00:10:19.636 Relative Read Throughput: 0 00:10:19.636 Relative Read Latency: 0 00:10:19.636 Relative Write Throughput: 0 00:10:19.636 Relative Write Latency: 0 00:10:19.636 Idle Power[2024-07-21 01:20:04.751349] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80616 terminated unexpected 00:10:19.636 : Not Reported 00:10:19.636 Active Power: Not Reported 00:10:19.636 Non-Operational Permissive Mode: Not Supported 00:10:19.636 00:10:19.636 Health Information 00:10:19.636 ================== 00:10:19.636 Critical Warnings: 00:10:19.636 Available Spare Space: OK 00:10:19.636 Temperature: OK 00:10:19.636 Device Reliability: OK 00:10:19.636 Read Only: No 00:10:19.636 Volatile Memory Backup: OK 00:10:19.636 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.636 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.636 Available Spare: 0% 00:10:19.636 Available Spare Threshold: 0% 00:10:19.636 Life Percentage Used: 0% 00:10:19.636 Data Units Read: 1185 00:10:19.636 Data Units Written: 1023 00:10:19.636 Host Read Commands: 50875 00:10:19.636 Host Write Commands: 49464 00:10:19.636 Controller Busy Time: 0 minutes 00:10:19.636 Power Cycles: 0 00:10:19.636 Power On Hours: 0 hours 00:10:19.636 Unsafe Shutdowns: 0 00:10:19.636 Unrecoverable Media Errors: 0 00:10:19.636 Lifetime Error Log Entries: 0 00:10:19.636 Warning Temperature Time: 0 minutes 00:10:19.636 Critical Temperature Time: 0 minutes 00:10:19.636 00:10:19.636 Number of Queues 00:10:19.636 ================ 00:10:19.636 Number of I/O Submission Queues: 64 00:10:19.636 Number of I/O Completion Queues: 64 00:10:19.636 00:10:19.636 ZNS Specific Controller Data 00:10:19.636 ============================ 00:10:19.636 Zone Append Size Limit: 0 00:10:19.636 00:10:19.636 00:10:19.636 Active Namespaces 00:10:19.636 ================= 00:10:19.636 Namespace ID:1 00:10:19.636 Error Recovery Timeout: Unlimited 00:10:19.636 Command Set Identifier: NVM (00h) 00:10:19.636 Deallocate: Supported 00:10:19.636 Deallocated/Unwritten Error: Supported 00:10:19.636 Deallocated Read Value: All 0x00 00:10:19.636 Deallocate in Write Zeroes: Not Supported 00:10:19.636 Deallocated Guard Field: 0xFFFF 00:10:19.636 Flush: Supported 00:10:19.636 Reservation: Not Supported 00:10:19.636 Metadata Transferred 
as: Separate Metadata Buffer 00:10:19.636 Namespace Sharing Capabilities: Private 00:10:19.636 Size (in LBAs): 1548666 (5GiB) 00:10:19.636 Capacity (in LBAs): 1548666 (5GiB) 00:10:19.636 Utilization (in LBAs): 1548666 (5GiB) 00:10:19.636 Thin Provisioning: Not Supported 00:10:19.636 Per-NS Atomic Units: No 00:10:19.636 Maximum Single Source Range Length: 128 00:10:19.636 Maximum Copy Length: 128 00:10:19.637 Maximum Source Range Count: 128 00:10:19.637 NGUID/EUI64 Never Reused: No 00:10:19.637 Namespace Write Protected: No 00:10:19.637 Number of LBA Formats: 8 00:10:19.637 Current LBA Format: LBA Format #07 00:10:19.637 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.637 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.637 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.637 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.637 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.637 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.637 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.637 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.637 00:10:19.637 ===================================================== 00:10:19.637 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:19.637 ===================================================== 00:10:19.637 Controller Capabilities/Features 00:10:19.637 ================================ 00:10:19.637 Vendor ID: 1b36 00:10:19.637 Subsystem Vendor ID: 1af4 00:10:19.637 Serial Number: 12341 00:10:19.637 Model Number: QEMU NVMe Ctrl 00:10:19.637 Firmware Version: 8.0.0 00:10:19.637 Recommended Arb Burst: 6 00:10:19.637 IEEE OUI Identifier: 00 54 52 00:10:19.637 Multi-path I/O 00:10:19.637 May have multiple subsystem ports: No 00:10:19.637 May have multiple controllers: No 00:10:19.637 Associated with SR-IOV VF: No 00:10:19.637 Max Data Transfer Size: 524288 00:10:19.637 Max Number of Namespaces: 256 00:10:19.637 Max Number of I/O Queues: 64 00:10:19.637 NVMe Specification Version (VS): 1.4 00:10:19.637 NVMe Specification Version (Identify): 1.4 00:10:19.637 Maximum Queue Entries: 2048 00:10:19.637 Contiguous Queues Required: Yes 00:10:19.637 Arbitration Mechanisms Supported 00:10:19.637 Weighted Round Robin: Not Supported 00:10:19.637 Vendor Specific: Not Supported 00:10:19.637 Reset Timeout: 7500 ms 00:10:19.637 Doorbell Stride: 4 bytes 00:10:19.637 NVM Subsystem Reset: Not Supported 00:10:19.637 Command Sets Supported 00:10:19.637 NVM Command Set: Supported 00:10:19.637 Boot Partition: Not Supported 00:10:19.637 Memory Page Size Minimum: 4096 bytes 00:10:19.637 Memory Page Size Maximum: 65536 bytes 00:10:19.637 Persistent Memory Region: Not Supported 00:10:19.637 Optional Asynchronous Events Supported 00:10:19.637 Namespace Attribute Notices: Supported 00:10:19.637 Firmware Activation Notices: Not Supported 00:10:19.637 ANA Change Notices: Not Supported 00:10:19.637 PLE Aggregate Log Change Notices: Not Supported 00:10:19.637 LBA Status Info Alert Notices: Not Supported 00:10:19.637 EGE Aggregate Log Change Notices: Not Supported 00:10:19.637 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.637 Zone Descriptor Change Notices: Not Supported 00:10:19.637 Discovery Log Change Notices: Not Supported 00:10:19.637 Controller Attributes 00:10:19.637 128-bit Host Identifier: Not Supported 00:10:19.637 Non-Operational Permissive Mode: Not Supported 00:10:19.637 NVM Sets: Not Supported 00:10:19.637 Read Recovery Levels: Not Supported 00:10:19.637 Endurance Groups: Not Supported 00:10:19.637 
Predictable Latency Mode: Not Supported 00:10:19.637 Traffic Based Keep ALive: Not Supported 00:10:19.637 Namespace Granularity: Not Supported 00:10:19.637 SQ Associations: Not Supported 00:10:19.637 UUID List: Not Supported 00:10:19.637 Multi-Domain Subsystem: Not Supported 00:10:19.637 Fixed Capacity Management: Not Supported 00:10:19.637 Variable Capacity Management: Not Supported 00:10:19.637 Delete Endurance Group: Not Supported 00:10:19.637 Delete NVM Set: Not Supported 00:10:19.637 Extended LBA Formats Supported: Supported 00:10:19.637 Flexible Data Placement Supported: Not Supported 00:10:19.637 00:10:19.637 Controller Memory Buffer Support 00:10:19.637 ================================ 00:10:19.637 Supported: No 00:10:19.637 00:10:19.637 Persistent Memory Region Support 00:10:19.637 ================================ 00:10:19.637 Supported: No 00:10:19.637 00:10:19.637 Admin Command Set Attributes 00:10:19.637 ============================ 00:10:19.637 Security Send/Receive: Not Supported 00:10:19.637 Format NVM: Supported 00:10:19.637 Firmware Activate/Download: Not Supported 00:10:19.637 Namespace Management: Supported 00:10:19.637 Device Self-Test: Not Supported 00:10:19.637 Directives: Supported 00:10:19.637 NVMe-MI: Not Supported 00:10:19.637 Virtualization Management: Not Supported 00:10:19.637 Doorbell Buffer Config: Supported 00:10:19.637 Get LBA Status Capability: Not Supported 00:10:19.637 Command & Feature Lockdown Capability: Not Supported 00:10:19.637 Abort Command Limit: 4 00:10:19.637 Async Event Request Limit: 4 00:10:19.637 Number of Firmware Slots: N/A 00:10:19.637 Firmware Slot 1 Read-Only: N/A 00:10:19.637 Firmware Activation Without Reset: N/A 00:10:19.637 Multiple Update Detection Support: N/A 00:10:19.637 Firmware Update Granularity: No Information Provided 00:10:19.637 Per-Namespace SMART Log: Yes 00:10:19.637 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.637 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:19.637 Command Effects Log Page: Supported 00:10:19.637 Get Log Page Extended Data: Supported 00:10:19.637 Telemetry Log Pages: Not Supported 00:10:19.637 Persistent Event Log Pages: Not Supported 00:10:19.637 Supported Log Pages Log Page: May Support 00:10:19.637 Commands Supported & Effects Log Page: Not Supported 00:10:19.637 Feature Identifiers & Effects Log Page:May Support 00:10:19.637 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.637 Data Area 4 for Telemetry Log: Not Supported 00:10:19.637 Error Log Page Entries Supported: 1 00:10:19.637 Keep Alive: Not Supported 00:10:19.637 00:10:19.637 NVM Command Set Attributes 00:10:19.637 ========================== 00:10:19.637 Submission Queue Entry Size 00:10:19.637 Max: 64 00:10:19.637 Min: 64 00:10:19.637 Completion Queue Entry Size 00:10:19.637 Max: 16 00:10:19.637 Min: 16 00:10:19.637 Number of Namespaces: 256 00:10:19.637 Compare Command: Supported 00:10:19.637 Write Uncorrectable Command: Not Supported 00:10:19.637 Dataset Management Command: Supported 00:10:19.637 Write Zeroes Command: Supported 00:10:19.637 Set Features Save Field: Supported 00:10:19.637 Reservations: Not Supported 00:10:19.637 Timestamp: Supported 00:10:19.637 Copy: Supported 00:10:19.637 Volatile Write Cache: Present 00:10:19.637 Atomic Write Unit (Normal): 1 00:10:19.637 Atomic Write Unit (PFail): 1 00:10:19.637 Atomic Compare & Write Unit: 1 00:10:19.637 Fused Compare & Write: Not Supported 00:10:19.637 Scatter-Gather List 00:10:19.637 SGL Command Set: Supported 00:10:19.637 SGL Keyed: Not Supported 
00:10:19.637 SGL Bit Bucket Descriptor: Not Supported 00:10:19.637 SGL Metadata Pointer: Not Supported 00:10:19.637 Oversized SGL: Not Supported 00:10:19.637 SGL Metadata Address: Not Supported 00:10:19.637 SGL Offset: Not Supported 00:10:19.637 Transport SGL Data Block: Not Supported 00:10:19.637 Replay Protected Memory Block: Not Supported 00:10:19.637 00:10:19.637 Firmware Slot Information 00:10:19.637 ========================= 00:10:19.637 Active slot: 1 00:10:19.637 Slot 1 Firmware Revision: 1.0 00:10:19.637 00:10:19.637 00:10:19.637 Commands Supported and Effects 00:10:19.637 ============================== 00:10:19.637 Admin Commands 00:10:19.637 -------------- 00:10:19.637 Delete I/O Submission Queue (00h): Supported 00:10:19.637 Create I/O Submission Queue (01h): Supported 00:10:19.637 Get Log Page (02h): Supported 00:10:19.637 Delete I/O Completion Queue (04h): Supported 00:10:19.637 Create I/O Completion Queue (05h): Supported 00:10:19.637 Identify (06h): Supported 00:10:19.637 Abort (08h): Supported 00:10:19.637 Set Features (09h): Supported 00:10:19.637 Get Features (0Ah): Supported 00:10:19.637 Asynchronous Event Request (0Ch): Supported 00:10:19.637 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.637 Directive Send (19h): Supported 00:10:19.637 Directive Receive (1Ah): Supported 00:10:19.637 Virtualization Management (1Ch): Supported 00:10:19.637 Doorbell Buffer Config (7Ch): Supported 00:10:19.637 Format NVM (80h): Supported LBA-Change 00:10:19.637 I/O Commands 00:10:19.637 ------------ 00:10:19.637 Flush (00h): Supported LBA-Change 00:10:19.637 Write (01h): Supported LBA-Change 00:10:19.637 Read (02h): Supported 00:10:19.637 Compare (05h): Supported 00:10:19.637 Write Zeroes (08h): Supported LBA-Change 00:10:19.637 Dataset Management (09h): Supported LBA-Change 00:10:19.637 Unknown (0Ch): Supported 00:10:19.637 Unknown (12h): Supported 00:10:19.637 Copy (19h): Supported LBA-Change 00:10:19.637 Unknown (1Dh): Supported LBA-Change 00:10:19.637 00:10:19.637 Error Log 00:10:19.637 ========= 00:10:19.637 00:10:19.637 Arbitration 00:10:19.637 =========== 00:10:19.637 Arbitration Burst: no limit 00:10:19.637 00:10:19.637 Power Management 00:10:19.637 ================ 00:10:19.637 Number of Power States: 1 00:10:19.637 Current Power State: Power State #0 00:10:19.637 Power State #0: 00:10:19.637 Max Power: 25.00 W 00:10:19.637 Non-Operational State: Operational 00:10:19.637 Entry Latency: 16 microseconds 00:10:19.637 Exit Latency: 4 microseconds 00:10:19.637 Relative Read Throughput: 0 00:10:19.638 Relative Read Latency: 0 00:10:19.638 Relative Write Throughput: 0 00:10:19.638 Relative Write Latency: 0 00:10:19.638 Idle Power: Not Reported 00:10:19.638 Active Power: Not Reported 00:10:19.638 Non-Operational Permissive Mode: Not Supported 00:10:19.638 00:10:19.638 Health Information 00:10:19.638 ================== 00:10:19.638 Critical Warnings: 00:10:19.638 Available Spare Space: OK 00:10:19.638 Temperature: OK 00:10:19.638 Device Reliability: OK 00:10:19.638 Read Only: No 00:10:19.638 Volatile Memory Backup: OK 00:10:19.638 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.638 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.638 Available Spare: 0% 00:10:19.638 Available Spare Threshold: 0% 00:10:19.638 Life Percentage Used: 0% 00:10:19.638 Data Units Read: 894 00:10:19.638 Data Units Written: 743 00:10:19.638 Host Read Commands: 37521 00:10:19.638 Host Write Commands: 35253 00:10:19.638 Controller Busy Time: 0 minutes 00:10:19.638 Power Cycles: 0 
00:10:19.638 Power On Hours: 0 hours 00:10:19.638 Unsafe Shutdowns: 0 00:10:19.638 Unrecoverable Media Errors: 0 00:10:19.638 Lifetime Error Log Entries: 0 00:10:19.638 Warning Temperature Time: 0 minutes 00:10:19.638 Critical Temperature Time: 0 minutes 00:10:19.638 00:10:19.638 Number of Queues 00:10:19.638 ================ 00:10:19.638 Number of I/O Submission Queues: 64 00:10:19.638 Number of I/O Completion Queues: 64 00:10:19.638 00:10:19.638 ZNS Specific Controller Data 00:10:19.638 ============================ 00:10:19.638 Zone Append Size Limit: 0 00:10:19.638 00:10:19.638 00:10:19.638 Active Namespaces 00:10:19.638 ================= 00:10:19.638 Namespace ID:1 00:10:19.638 Error Recovery Timeout: Unlimited 00:10:19.638 Command Set Identifier: [2024-07-21 01:20:04.752369] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80616 terminated unexpected 00:10:19.638 NVM (00h) 00:10:19.638 Deallocate: Supported 00:10:19.638 Deallocated/Unwritten Error: Supported 00:10:19.638 Deallocated Read Value: All 0x00 00:10:19.638 Deallocate in Write Zeroes: Not Supported 00:10:19.638 Deallocated Guard Field: 0xFFFF 00:10:19.638 Flush: Supported 00:10:19.638 Reservation: Not Supported 00:10:19.638 Namespace Sharing Capabilities: Private 00:10:19.638 Size (in LBAs): 1310720 (5GiB) 00:10:19.638 Capacity (in LBAs): 1310720 (5GiB) 00:10:19.638 Utilization (in LBAs): 1310720 (5GiB) 00:10:19.638 Thin Provisioning: Not Supported 00:10:19.638 Per-NS Atomic Units: No 00:10:19.638 Maximum Single Source Range Length: 128 00:10:19.638 Maximum Copy Length: 128 00:10:19.638 Maximum Source Range Count: 128 00:10:19.638 NGUID/EUI64 Never Reused: No 00:10:19.638 Namespace Write Protected: No 00:10:19.638 Number of LBA Formats: 8 00:10:19.638 Current LBA Format: LBA Format #04 00:10:19.638 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.638 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.638 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.638 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.638 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.638 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.638 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.638 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.638 00:10:19.638 ===================================================== 00:10:19.638 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:19.638 ===================================================== 00:10:19.638 Controller Capabilities/Features 00:10:19.638 ================================ 00:10:19.638 Vendor ID: 1b36 00:10:19.638 Subsystem Vendor ID: 1af4 00:10:19.638 Serial Number: 12343 00:10:19.638 Model Number: QEMU NVMe Ctrl 00:10:19.638 Firmware Version: 8.0.0 00:10:19.638 Recommended Arb Burst: 6 00:10:19.638 IEEE OUI Identifier: 00 54 52 00:10:19.638 Multi-path I/O 00:10:19.638 May have multiple subsystem ports: No 00:10:19.638 May have multiple controllers: Yes 00:10:19.638 Associated with SR-IOV VF: No 00:10:19.638 Max Data Transfer Size: 524288 00:10:19.638 Max Number of Namespaces: 256 00:10:19.638 Max Number of I/O Queues: 64 00:10:19.638 NVMe Specification Version (VS): 1.4 00:10:19.638 NVMe Specification Version (Identify): 1.4 00:10:19.638 Maximum Queue Entries: 2048 00:10:19.638 Contiguous Queues Required: Yes 00:10:19.638 Arbitration Mechanisms Supported 00:10:19.638 Weighted Round Robin: Not Supported 00:10:19.638 Vendor Specific: Not Supported 00:10:19.638 Reset Timeout: 7500 ms 00:10:19.638 
Doorbell Stride: 4 bytes 00:10:19.638 NVM Subsystem Reset: Not Supported 00:10:19.638 Command Sets Supported 00:10:19.638 NVM Command Set: Supported 00:10:19.638 Boot Partition: Not Supported 00:10:19.638 Memory Page Size Minimum: 4096 bytes 00:10:19.638 Memory Page Size Maximum: 65536 bytes 00:10:19.638 Persistent Memory Region: Not Supported 00:10:19.638 Optional Asynchronous Events Supported 00:10:19.638 Namespace Attribute Notices: Supported 00:10:19.638 Firmware Activation Notices: Not Supported 00:10:19.638 ANA Change Notices: Not Supported 00:10:19.638 PLE Aggregate Log Change Notices: Not Supported 00:10:19.638 LBA Status Info Alert Notices: Not Supported 00:10:19.638 EGE Aggregate Log Change Notices: Not Supported 00:10:19.638 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.638 Zone Descriptor Change Notices: Not Supported 00:10:19.638 Discovery Log Change Notices: Not Supported 00:10:19.638 Controller Attributes 00:10:19.638 128-bit Host Identifier: Not Supported 00:10:19.638 Non-Operational Permissive Mode: Not Supported 00:10:19.638 NVM Sets: Not Supported 00:10:19.638 Read Recovery Levels: Not Supported 00:10:19.638 Endurance Groups: Supported 00:10:19.638 Predictable Latency Mode: Not Supported 00:10:19.638 Traffic Based Keep ALive: Not Supported 00:10:19.638 Namespace Granularity: Not Supported 00:10:19.638 SQ Associations: Not Supported 00:10:19.638 UUID List: Not Supported 00:10:19.638 Multi-Domain Subsystem: Not Supported 00:10:19.638 Fixed Capacity Management: Not Supported 00:10:19.638 Variable Capacity Management: Not Supported 00:10:19.638 Delete Endurance Group: Not Supported 00:10:19.638 Delete NVM Set: Not Supported 00:10:19.638 Extended LBA Formats Supported: Supported 00:10:19.638 Flexible Data Placement Supported: Supported 00:10:19.638 00:10:19.638 Controller Memory Buffer Support 00:10:19.638 ================================ 00:10:19.638 Supported: No 00:10:19.638 00:10:19.638 Persistent Memory Region Support 00:10:19.638 ================================ 00:10:19.638 Supported: No 00:10:19.638 00:10:19.638 Admin Command Set Attributes 00:10:19.638 ============================ 00:10:19.638 Security Send/Receive: Not Supported 00:10:19.638 Format NVM: Supported 00:10:19.638 Firmware Activate/Download: Not Supported 00:10:19.638 Namespace Management: Supported 00:10:19.638 Device Self-Test: Not Supported 00:10:19.638 Directives: Supported 00:10:19.638 NVMe-MI: Not Supported 00:10:19.638 Virtualization Management: Not Supported 00:10:19.638 Doorbell Buffer Config: Supported 00:10:19.638 Get LBA Status Capability: Not Supported 00:10:19.638 Command & Feature Lockdown Capability: Not Supported 00:10:19.638 Abort Command Limit: 4 00:10:19.638 Async Event Request Limit: 4 00:10:19.638 Number of Firmware Slots: N/A 00:10:19.638 Firmware Slot 1 Read-Only: N/A 00:10:19.638 Firmware Activation Without Reset: N/A 00:10:19.638 Multiple Update Detection Support: N/A 00:10:19.638 Firmware Update Granularity: No Information Provided 00:10:19.638 Per-Namespace SMART Log: Yes 00:10:19.638 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.638 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:19.638 Command Effects Log Page: Supported 00:10:19.638 Get Log Page Extended Data: Supported 00:10:19.638 Telemetry Log Pages: Not Supported 00:10:19.638 Persistent Event Log Pages: Not Supported 00:10:19.638 Supported Log Pages Log Page: May Support 00:10:19.638 Commands Supported & Effects Log Page: Not Supported 00:10:19.638 Feature Identifiers & Effects Log 
Page:May Support 00:10:19.638 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.638 Data Area 4 for Telemetry Log: Not Supported 00:10:19.638 Error Log Page Entries Supported: 1 00:10:19.638 Keep Alive: Not Supported 00:10:19.638 00:10:19.638 NVM Command Set Attributes 00:10:19.638 ========================== 00:10:19.638 Submission Queue Entry Size 00:10:19.638 Max: 64 00:10:19.638 Min: 64 00:10:19.638 Completion Queue Entry Size 00:10:19.638 Max: 16 00:10:19.638 Min: 16 00:10:19.638 Number of Namespaces: 256 00:10:19.638 Compare Command: Supported 00:10:19.638 Write Uncorrectable Command: Not Supported 00:10:19.638 Dataset Management Command: Supported 00:10:19.638 Write Zeroes Command: Supported 00:10:19.638 Set Features Save Field: Supported 00:10:19.638 Reservations: Not Supported 00:10:19.638 Timestamp: Supported 00:10:19.638 Copy: Supported 00:10:19.638 Volatile Write Cache: Present 00:10:19.638 Atomic Write Unit (Normal): 1 00:10:19.638 Atomic Write Unit (PFail): 1 00:10:19.638 Atomic Compare & Write Unit: 1 00:10:19.638 Fused Compare & Write: Not Supported 00:10:19.638 Scatter-Gather List 00:10:19.639 SGL Command Set: Supported 00:10:19.639 SGL Keyed: Not Supported 00:10:19.639 SGL Bit Bucket Descriptor: Not Supported 00:10:19.639 SGL Metadata Pointer: Not Supported 00:10:19.639 Oversized SGL: Not Supported 00:10:19.639 SGL Metadata Address: Not Supported 00:10:19.639 SGL Offset: Not Supported 00:10:19.639 Transport SGL Data Block: Not Supported 00:10:19.639 Replay Protected Memory Block: Not Supported 00:10:19.639 00:10:19.639 Firmware Slot Information 00:10:19.639 ========================= 00:10:19.639 Active slot: 1 00:10:19.639 Slot 1 Firmware Revision: 1.0 00:10:19.639 00:10:19.639 00:10:19.639 Commands Supported and Effects 00:10:19.639 ============================== 00:10:19.639 Admin Commands 00:10:19.639 -------------- 00:10:19.639 Delete I/O Submission Queue (00h): Supported 00:10:19.639 Create I/O Submission Queue (01h): Supported 00:10:19.639 Get Log Page (02h): Supported 00:10:19.639 Delete I/O Completion Queue (04h): Supported 00:10:19.639 Create I/O Completion Queue (05h): Supported 00:10:19.639 Identify (06h): Supported 00:10:19.639 Abort (08h): Supported 00:10:19.639 Set Features (09h): Supported 00:10:19.639 Get Features (0Ah): Supported 00:10:19.639 Asynchronous Event Request (0Ch): Supported 00:10:19.639 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.639 Directive Send (19h): Supported 00:10:19.639 Directive Receive (1Ah): Supported 00:10:19.639 Virtualization Management (1Ch): Supported 00:10:19.639 Doorbell Buffer Config (7Ch): Supported 00:10:19.639 Format NVM (80h): Supported LBA-Change 00:10:19.639 I/O Commands 00:10:19.639 ------------ 00:10:19.639 Flush (00h): Supported LBA-Change 00:10:19.639 Write (01h): Supported LBA-Change 00:10:19.639 Read (02h): Supported 00:10:19.639 Compare (05h): Supported 00:10:19.639 Write Zeroes (08h): Supported LBA-Change 00:10:19.639 Dataset Management (09h): Supported LBA-Change 00:10:19.639 Unknown (0Ch): Supported 00:10:19.639 Unknown (12h): Supported 00:10:19.639 Copy (19h): Supported LBA-Change 00:10:19.639 Unknown (1Dh): Supported LBA-Change 00:10:19.639 00:10:19.639 Error Log 00:10:19.639 ========= 00:10:19.639 00:10:19.639 Arbitration 00:10:19.639 =========== 00:10:19.639 Arbitration Burst: no limit 00:10:19.639 00:10:19.639 Power Management 00:10:19.639 ================ 00:10:19.639 Number of Power States: 1 00:10:19.639 Current Power State: Power State #0 00:10:19.639 Power State #0: 
00:10:19.639 Max Power: 25.00 W 00:10:19.639 Non-Operational State: Operational 00:10:19.639 Entry Latency: 16 microseconds 00:10:19.639 Exit Latency: 4 microseconds 00:10:19.639 Relative Read Throughput: 0 00:10:19.639 Relative Read Latency: 0 00:10:19.639 Relative Write Throughput: 0 00:10:19.639 Relative Write Latency: 0 00:10:19.639 Idle Power: Not Reported 00:10:19.639 Active Power: Not Reported 00:10:19.639 Non-Operational Permissive Mode: Not Supported 00:10:19.639 00:10:19.639 Health Information 00:10:19.639 ================== 00:10:19.639 Critical Warnings: 00:10:19.639 Available Spare Space: OK 00:10:19.639 Temperature: OK 00:10:19.639 Device Reliability: OK 00:10:19.639 Read Only: No 00:10:19.639 Volatile Memory Backup: OK 00:10:19.639 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.639 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.639 Available Spare: 0% 00:10:19.639 Available Spare Threshold: 0% 00:10:19.639 Life Percentage Used: 0% 00:10:19.639 Data Units Read: 1062 00:10:19.639 Data Units Written: 955 00:10:19.639 Host Read Commands: 38447 00:10:19.639 Host Write Commands: 37037 00:10:19.639 Controller Busy Time: 0 minutes 00:10:19.639 Power Cycles: 0 00:10:19.639 Power On Hours: 0 hours 00:10:19.639 Unsafe Shutdowns: 0 00:10:19.639 Unrecoverable Media Errors: 0 00:10:19.639 Lifetime Error Log Entries: 0 00:10:19.639 Warning Temperature Time: 0 minutes 00:10:19.639 Critical Temperature Time: 0 minutes 00:10:19.639 00:10:19.639 Number of Queues 00:10:19.639 ================ 00:10:19.639 Number of I/O Submission Queues: 64 00:10:19.639 Number of I/O Completion Queues: 64 00:10:19.639 00:10:19.639 ZNS Specific Controller Data 00:10:19.639 ============================ 00:10:19.639 Zone Append Size Limit: 0 00:10:19.639 00:10:19.639 00:10:19.639 Active Namespaces 00:10:19.639 ================= 00:10:19.639 Namespace ID:1 00:10:19.639 Error Recovery Timeout: Unlimited 00:10:19.639 Command Set Identifier: NVM (00h) 00:10:19.639 Deallocate: Supported 00:10:19.639 Deallocated/Unwritten Error: Supported 00:10:19.639 Deallocated Read Value: All 0x00 00:10:19.639 Deallocate in Write Zeroes: Not Supported 00:10:19.639 Deallocated Guard Field: 0xFFFF 00:10:19.639 Flush: Supported 00:10:19.639 Reservation: Not Supported 00:10:19.639 Namespace Sharing Capabilities: Multiple Controllers 00:10:19.639 Size (in LBAs): 262144 (1GiB) 00:10:19.639 Capacity (in LBAs): 262144 (1GiB) 00:10:19.639 Utilization (in LBAs): 262144 (1GiB) 00:10:19.639 Thin Provisioning: Not Supported 00:10:19.639 Per-NS Atomic Units: No 00:10:19.639 Maximum Single Source Range Length: 128 00:10:19.639 Maximum Copy Length: 128 00:10:19.639 Maximum Source Range Count: 128 00:10:19.639 NGUID/EUI64 Never Reused: No 00:10:19.639 Namespace Write Protected: No 00:10:19.639 Endurance group ID: 1 00:10:19.639 Number of LBA Formats: 8 00:10:19.639 Current LBA Format: LBA Format #04 00:10:19.639 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.639 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.639 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.639 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.639 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.639 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.639 LBA Format #06: Data Si[2024-07-21 01:20:04.754018] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80616 terminated unexpected 00:10:19.639 ze: 4096 Metadata Size: 16 00:10:19.639 LBA Format #07: Data Size: 4096 Metadata Size: 
64 00:10:19.639 00:10:19.639 Get Feature FDP: 00:10:19.639 ================ 00:10:19.639 Enabled: Yes 00:10:19.639 FDP configuration index: 0 00:10:19.639 00:10:19.639 FDP configurations log page 00:10:19.639 =========================== 00:10:19.639 Number of FDP configurations: 1 00:10:19.639 Version: 0 00:10:19.639 Size: 112 00:10:19.639 FDP Configuration Descriptor: 0 00:10:19.639 Descriptor Size: 96 00:10:19.639 Reclaim Group Identifier format: 2 00:10:19.639 FDP Volatile Write Cache: Not Present 00:10:19.639 FDP Configuration: Valid 00:10:19.639 Vendor Specific Size: 0 00:10:19.639 Number of Reclaim Groups: 2 00:10:19.639 Number of Recalim Unit Handles: 8 00:10:19.639 Max Placement Identifiers: 128 00:10:19.639 Number of Namespaces Suppprted: 256 00:10:19.639 Reclaim unit Nominal Size: 6000000 bytes 00:10:19.639 Estimated Reclaim Unit Time Limit: Not Reported 00:10:19.639 RUH Desc #000: RUH Type: Initially Isolated 00:10:19.639 RUH Desc #001: RUH Type: Initially Isolated 00:10:19.639 RUH Desc #002: RUH Type: Initially Isolated 00:10:19.639 RUH Desc #003: RUH Type: Initially Isolated 00:10:19.639 RUH Desc #004: RUH Type: Initially Isolated 00:10:19.639 RUH Desc #005: RUH Type: Initially Isolated 00:10:19.639 RUH Desc #006: RUH Type: Initially Isolated 00:10:19.639 RUH Desc #007: RUH Type: Initially Isolated 00:10:19.639 00:10:19.639 FDP reclaim unit handle usage log page 00:10:19.639 ====================================== 00:10:19.639 Number of Reclaim Unit Handles: 8 00:10:19.639 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:19.639 RUH Usage Desc #001: RUH Attributes: Unused 00:10:19.639 RUH Usage Desc #002: RUH Attributes: Unused 00:10:19.639 RUH Usage Desc #003: RUH Attributes: Unused 00:10:19.639 RUH Usage Desc #004: RUH Attributes: Unused 00:10:19.639 RUH Usage Desc #005: RUH Attributes: Unused 00:10:19.639 RUH Usage Desc #006: RUH Attributes: Unused 00:10:19.639 RUH Usage Desc #007: RUH Attributes: Unused 00:10:19.639 00:10:19.639 FDP statistics log page 00:10:19.639 ======================= 00:10:19.639 Host bytes with metadata written: 604676096 00:10:19.639 Media bytes with metadata written: 604758016 00:10:19.639 Media bytes erased: 0 00:10:19.639 00:10:19.639 FDP events log page 00:10:19.639 =================== 00:10:19.639 Number of FDP events: 0 00:10:19.639 00:10:19.639 ===================================================== 00:10:19.639 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:19.639 ===================================================== 00:10:19.639 Controller Capabilities/Features 00:10:19.639 ================================ 00:10:19.639 Vendor ID: 1b36 00:10:19.639 Subsystem Vendor ID: 1af4 00:10:19.639 Serial Number: 12342 00:10:19.639 Model Number: QEMU NVMe Ctrl 00:10:19.639 Firmware Version: 8.0.0 00:10:19.639 Recommended Arb Burst: 6 00:10:19.639 IEEE OUI Identifier: 00 54 52 00:10:19.639 Multi-path I/O 00:10:19.639 May have multiple subsystem ports: No 00:10:19.639 May have multiple controllers: No 00:10:19.639 Associated with SR-IOV VF: No 00:10:19.639 Max Data Transfer Size: 524288 00:10:19.639 Max Number of Namespaces: 256 00:10:19.639 Max Number of I/O Queues: 64 00:10:19.640 NVMe Specification Version (VS): 1.4 00:10:19.640 NVMe Specification Version (Identify): 1.4 00:10:19.640 Maximum Queue Entries: 2048 00:10:19.640 Contiguous Queues Required: Yes 00:10:19.640 Arbitration Mechanisms Supported 00:10:19.640 Weighted Round Robin: Not Supported 00:10:19.640 Vendor Specific: Not Supported 00:10:19.640 Reset Timeout: 7500 ms 
00:10:19.640 Doorbell Stride: 4 bytes 00:10:19.640 NVM Subsystem Reset: Not Supported 00:10:19.640 Command Sets Supported 00:10:19.640 NVM Command Set: Supported 00:10:19.640 Boot Partition: Not Supported 00:10:19.640 Memory Page Size Minimum: 4096 bytes 00:10:19.640 Memory Page Size Maximum: 65536 bytes 00:10:19.640 Persistent Memory Region: Not Supported 00:10:19.640 Optional Asynchronous Events Supported 00:10:19.640 Namespace Attribute Notices: Supported 00:10:19.640 Firmware Activation Notices: Not Supported 00:10:19.640 ANA Change Notices: Not Supported 00:10:19.640 PLE Aggregate Log Change Notices: Not Supported 00:10:19.640 LBA Status Info Alert Notices: Not Supported 00:10:19.640 EGE Aggregate Log Change Notices: Not Supported 00:10:19.640 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.640 Zone Descriptor Change Notices: Not Supported 00:10:19.640 Discovery Log Change Notices: Not Supported 00:10:19.640 Controller Attributes 00:10:19.640 128-bit Host Identifier: Not Supported 00:10:19.640 Non-Operational Permissive Mode: Not Supported 00:10:19.640 NVM Sets: Not Supported 00:10:19.640 Read Recovery Levels: Not Supported 00:10:19.640 Endurance Groups: Not Supported 00:10:19.640 Predictable Latency Mode: Not Supported 00:10:19.640 Traffic Based Keep ALive: Not Supported 00:10:19.640 Namespace Granularity: Not Supported 00:10:19.640 SQ Associations: Not Supported 00:10:19.640 UUID List: Not Supported 00:10:19.640 Multi-Domain Subsystem: Not Supported 00:10:19.640 Fixed Capacity Management: Not Supported 00:10:19.640 Variable Capacity Management: Not Supported 00:10:19.640 Delete Endurance Group: Not Supported 00:10:19.640 Delete NVM Set: Not Supported 00:10:19.640 Extended LBA Formats Supported: Supported 00:10:19.640 Flexible Data Placement Supported: Not Supported 00:10:19.640 00:10:19.640 Controller Memory Buffer Support 00:10:19.640 ================================ 00:10:19.640 Supported: No 00:10:19.640 00:10:19.640 Persistent Memory Region Support 00:10:19.640 ================================ 00:10:19.640 Supported: No 00:10:19.640 00:10:19.640 Admin Command Set Attributes 00:10:19.640 ============================ 00:10:19.640 Security Send/Receive: Not Supported 00:10:19.640 Format NVM: Supported 00:10:19.640 Firmware Activate/Download: Not Supported 00:10:19.640 Namespace Management: Supported 00:10:19.640 Device Self-Test: Not Supported 00:10:19.640 Directives: Supported 00:10:19.640 NVMe-MI: Not Supported 00:10:19.640 Virtualization Management: Not Supported 00:10:19.640 Doorbell Buffer Config: Supported 00:10:19.640 Get LBA Status Capability: Not Supported 00:10:19.640 Command & Feature Lockdown Capability: Not Supported 00:10:19.640 Abort Command Limit: 4 00:10:19.640 Async Event Request Limit: 4 00:10:19.640 Number of Firmware Slots: N/A 00:10:19.640 Firmware Slot 1 Read-Only: N/A 00:10:19.640 Firmware Activation Without Reset: N/A 00:10:19.640 Multiple Update Detection Support: N/A 00:10:19.640 Firmware Update Granularity: No Information Provided 00:10:19.640 Per-Namespace SMART Log: Yes 00:10:19.640 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.640 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:19.640 Command Effects Log Page: Supported 00:10:19.640 Get Log Page Extended Data: Supported 00:10:19.640 Telemetry Log Pages: Not Supported 00:10:19.640 Persistent Event Log Pages: Not Supported 00:10:19.640 Supported Log Pages Log Page: May Support 00:10:19.640 Commands Supported & Effects Log Page: Not Supported 00:10:19.640 Feature Identifiers & 
Effects Log Page:May Support 00:10:19.640 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.640 Data Area 4 for Telemetry Log: Not Supported 00:10:19.640 Error Log Page Entries Supported: 1 00:10:19.640 Keep Alive: Not Supported 00:10:19.640 00:10:19.640 NVM Command Set Attributes 00:10:19.640 ========================== 00:10:19.640 Submission Queue Entry Size 00:10:19.640 Max: 64 00:10:19.640 Min: 64 00:10:19.640 Completion Queue Entry Size 00:10:19.640 Max: 16 00:10:19.640 Min: 16 00:10:19.640 Number of Namespaces: 256 00:10:19.640 Compare Command: Supported 00:10:19.640 Write Uncorrectable Command: Not Supported 00:10:19.640 Dataset Management Command: Supported 00:10:19.640 Write Zeroes Command: Supported 00:10:19.640 Set Features Save Field: Supported 00:10:19.640 Reservations: Not Supported 00:10:19.640 Timestamp: Supported 00:10:19.640 Copy: Supported 00:10:19.640 Volatile Write Cache: Present 00:10:19.640 Atomic Write Unit (Normal): 1 00:10:19.640 Atomic Write Unit (PFail): 1 00:10:19.640 Atomic Compare & Write Unit: 1 00:10:19.640 Fused Compare & Write: Not Supported 00:10:19.640 Scatter-Gather List 00:10:19.640 SGL Command Set: Supported 00:10:19.640 SGL Keyed: Not Supported 00:10:19.640 SGL Bit Bucket Descriptor: Not Supported 00:10:19.640 SGL Metadata Pointer: Not Supported 00:10:19.640 Oversized SGL: Not Supported 00:10:19.640 SGL Metadata Address: Not Supported 00:10:19.640 SGL Offset: Not Supported 00:10:19.640 Transport SGL Data Block: Not Supported 00:10:19.640 Replay Protected Memory Block: Not Supported 00:10:19.640 00:10:19.640 Firmware Slot Information 00:10:19.640 ========================= 00:10:19.640 Active slot: 1 00:10:19.640 Slot 1 Firmware Revision: 1.0 00:10:19.640 00:10:19.640 00:10:19.640 Commands Supported and Effects 00:10:19.640 ============================== 00:10:19.640 Admin Commands 00:10:19.640 -------------- 00:10:19.640 Delete I/O Submission Queue (00h): Supported 00:10:19.640 Create I/O Submission Queue (01h): Supported 00:10:19.640 Get Log Page (02h): Supported 00:10:19.640 Delete I/O Completion Queue (04h): Supported 00:10:19.640 Create I/O Completion Queue (05h): Supported 00:10:19.640 Identify (06h): Supported 00:10:19.640 Abort (08h): Supported 00:10:19.640 Set Features (09h): Supported 00:10:19.640 Get Features (0Ah): Supported 00:10:19.640 Asynchronous Event Request (0Ch): Supported 00:10:19.640 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.640 Directive Send (19h): Supported 00:10:19.640 Directive Receive (1Ah): Supported 00:10:19.640 Virtualization Management (1Ch): Supported 00:10:19.640 Doorbell Buffer Config (7Ch): Supported 00:10:19.640 Format NVM (80h): Supported LBA-Change 00:10:19.640 I/O Commands 00:10:19.640 ------------ 00:10:19.640 Flush (00h): Supported LBA-Change 00:10:19.640 Write (01h): Supported LBA-Change 00:10:19.640 Read (02h): Supported 00:10:19.640 Compare (05h): Supported 00:10:19.640 Write Zeroes (08h): Supported LBA-Change 00:10:19.640 Dataset Management (09h): Supported LBA-Change 00:10:19.640 Unknown (0Ch): Supported 00:10:19.640 Unknown (12h): Supported 00:10:19.640 Copy (19h): Supported LBA-Change 00:10:19.640 Unknown (1Dh): Supported LBA-Change 00:10:19.640 00:10:19.640 Error Log 00:10:19.640 ========= 00:10:19.640 00:10:19.640 Arbitration 00:10:19.640 =========== 00:10:19.640 Arbitration Burst: no limit 00:10:19.640 00:10:19.640 Power Management 00:10:19.640 ================ 00:10:19.640 Number of Power States: 1 00:10:19.641 Current Power State: Power State #0 00:10:19.641 Power 
State #0: 00:10:19.641 Max Power: 25.00 W 00:10:19.641 Non-Operational State: Operational 00:10:19.641 Entry Latency: 16 microseconds 00:10:19.641 Exit Latency: 4 microseconds 00:10:19.641 Relative Read Throughput: 0 00:10:19.641 Relative Read Latency: 0 00:10:19.641 Relative Write Throughput: 0 00:10:19.641 Relative Write Latency: 0 00:10:19.641 Idle Power: Not Reported 00:10:19.641 Active Power: Not Reported 00:10:19.641 Non-Operational Permissive Mode: Not Supported 00:10:19.641 00:10:19.641 Health Information 00:10:19.641 ================== 00:10:19.641 Critical Warnings: 00:10:19.641 Available Spare Space: OK 00:10:19.641 Temperature: OK 00:10:19.641 Device Reliability: OK 00:10:19.641 Read Only: No 00:10:19.641 Volatile Memory Backup: OK 00:10:19.641 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.641 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.641 Available Spare: 0% 00:10:19.641 Available Spare Threshold: 0% 00:10:19.641 Life Percentage Used: 0% 00:10:19.641 Data Units Read: 2689 00:10:19.641 Data Units Written: 2369 00:10:19.641 Host Read Commands: 111170 00:10:19.641 Host Write Commands: 106940 00:10:19.641 Controller Busy Time: 0 minutes 00:10:19.641 Power Cycles: 0 00:10:19.641 Power On Hours: 0 hours 00:10:19.641 Unsafe Shutdowns: 0 00:10:19.641 Unrecoverable Media Errors: 0 00:10:19.641 Lifetime Error Log Entries: 0 00:10:19.641 Warning Temperature Time: 0 minutes 00:10:19.641 Critical Temperature Time: 0 minutes 00:10:19.641 00:10:19.641 Number of Queues 00:10:19.641 ================ 00:10:19.641 Number of I/O Submission Queues: 64 00:10:19.641 Number of I/O Completion Queues: 64 00:10:19.641 00:10:19.641 ZNS Specific Controller Data 00:10:19.641 ============================ 00:10:19.641 Zone Append Size Limit: 0 00:10:19.641 00:10:19.641 00:10:19.641 Active Namespaces 00:10:19.641 ================= 00:10:19.641 Namespace ID:1 00:10:19.641 Error Recovery Timeout: Unlimited 00:10:19.641 Command Set Identifier: NVM (00h) 00:10:19.641 Deallocate: Supported 00:10:19.641 Deallocated/Unwritten Error: Supported 00:10:19.641 Deallocated Read Value: All 0x00 00:10:19.641 Deallocate in Write Zeroes: Not Supported 00:10:19.641 Deallocated Guard Field: 0xFFFF 00:10:19.641 Flush: Supported 00:10:19.641 Reservation: Not Supported 00:10:19.641 Namespace Sharing Capabilities: Private 00:10:19.641 Size (in LBAs): 1048576 (4GiB) 00:10:19.641 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.641 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.641 Thin Provisioning: Not Supported 00:10:19.641 Per-NS Atomic Units: No 00:10:19.641 Maximum Single Source Range Length: 128 00:10:19.641 Maximum Copy Length: 128 00:10:19.641 Maximum Source Range Count: 128 00:10:19.641 NGUID/EUI64 Never Reused: No 00:10:19.641 Namespace Write Protected: No 00:10:19.641 Number of LBA Formats: 8 00:10:19.641 Current LBA Format: LBA Format #04 00:10:19.641 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.641 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.641 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.641 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.641 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.641 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.641 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.641 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.641 00:10:19.641 Namespace ID:2 00:10:19.641 Error Recovery Timeout: Unlimited 00:10:19.641 Command Set Identifier: NVM (00h) 00:10:19.641 Deallocate: Supported 00:10:19.641 
Deallocated/Unwritten Error: Supported 00:10:19.641 Deallocated Read Value: All 0x00 00:10:19.641 Deallocate in Write Zeroes: Not Supported 00:10:19.641 Deallocated Guard Field: 0xFFFF 00:10:19.641 Flush: Supported 00:10:19.641 Reservation: Not Supported 00:10:19.641 Namespace Sharing Capabilities: Private 00:10:19.641 Size (in LBAs): 1048576 (4GiB) 00:10:19.641 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.641 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.641 Thin Provisioning: Not Supported 00:10:19.641 Per-NS Atomic Units: No 00:10:19.641 Maximum Single Source Range Length: 128 00:10:19.641 Maximum Copy Length: 128 00:10:19.641 Maximum Source Range Count: 128 00:10:19.641 NGUID/EUI64 Never Reused: No 00:10:19.641 Namespace Write Protected: No 00:10:19.641 Number of LBA Formats: 8 00:10:19.641 Current LBA Format: LBA Format #04 00:10:19.641 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.641 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.641 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.641 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.641 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.641 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.641 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.641 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.641 00:10:19.641 Namespace ID:3 00:10:19.641 Error Recovery Timeout: Unlimited 00:10:19.641 Command Set Identifier: NVM (00h) 00:10:19.641 Deallocate: Supported 00:10:19.641 Deallocated/Unwritten Error: Supported 00:10:19.641 Deallocated Read Value: All 0x00 00:10:19.641 Deallocate in Write Zeroes: Not Supported 00:10:19.641 Deallocated Guard Field: 0xFFFF 00:10:19.641 Flush: Supported 00:10:19.641 Reservation: Not Supported 00:10:19.641 Namespace Sharing Capabilities: Private 00:10:19.641 Size (in LBAs): 1048576 (4GiB) 00:10:19.641 Capacity (in LBAs): 1048576 (4GiB) 00:10:19.641 Utilization (in LBAs): 1048576 (4GiB) 00:10:19.641 Thin Provisioning: Not Supported 00:10:19.641 Per-NS Atomic Units: No 00:10:19.641 Maximum Single Source Range Length: 128 00:10:19.641 Maximum Copy Length: 128 00:10:19.641 Maximum Source Range Count: 128 00:10:19.641 NGUID/EUI64 Never Reused: No 00:10:19.641 Namespace Write Protected: No 00:10:19.641 Number of LBA Formats: 8 00:10:19.641 Current LBA Format: LBA Format #04 00:10:19.641 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.641 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.641 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.641 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.641 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.641 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.641 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.641 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.641 00:10:19.641 01:20:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.641 01:20:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:19.898 ===================================================== 00:10:19.898 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:19.898 ===================================================== 00:10:19.898 Controller Capabilities/Features 00:10:19.898 ================================ 00:10:19.898 Vendor ID: 1b36 00:10:19.898 Subsystem Vendor ID: 1af4 00:10:19.898 Serial Number: 12340 00:10:19.898 Model Number: QEMU NVMe Ctrl 
00:10:19.898 Firmware Version: 8.0.0 00:10:19.898 Recommended Arb Burst: 6 00:10:19.898 IEEE OUI Identifier: 00 54 52 00:10:19.898 Multi-path I/O 00:10:19.898 May have multiple subsystem ports: No 00:10:19.898 May have multiple controllers: No 00:10:19.898 Associated with SR-IOV VF: No 00:10:19.898 Max Data Transfer Size: 524288 00:10:19.898 Max Number of Namespaces: 256 00:10:19.898 Max Number of I/O Queues: 64 00:10:19.898 NVMe Specification Version (VS): 1.4 00:10:19.898 NVMe Specification Version (Identify): 1.4 00:10:19.898 Maximum Queue Entries: 2048 00:10:19.898 Contiguous Queues Required: Yes 00:10:19.898 Arbitration Mechanisms Supported 00:10:19.898 Weighted Round Robin: Not Supported 00:10:19.898 Vendor Specific: Not Supported 00:10:19.898 Reset Timeout: 7500 ms 00:10:19.898 Doorbell Stride: 4 bytes 00:10:19.898 NVM Subsystem Reset: Not Supported 00:10:19.898 Command Sets Supported 00:10:19.898 NVM Command Set: Supported 00:10:19.898 Boot Partition: Not Supported 00:10:19.898 Memory Page Size Minimum: 4096 bytes 00:10:19.898 Memory Page Size Maximum: 65536 bytes 00:10:19.898 Persistent Memory Region: Not Supported 00:10:19.898 Optional Asynchronous Events Supported 00:10:19.898 Namespace Attribute Notices: Supported 00:10:19.898 Firmware Activation Notices: Not Supported 00:10:19.898 ANA Change Notices: Not Supported 00:10:19.898 PLE Aggregate Log Change Notices: Not Supported 00:10:19.898 LBA Status Info Alert Notices: Not Supported 00:10:19.898 EGE Aggregate Log Change Notices: Not Supported 00:10:19.898 Normal NVM Subsystem Shutdown event: Not Supported 00:10:19.898 Zone Descriptor Change Notices: Not Supported 00:10:19.898 Discovery Log Change Notices: Not Supported 00:10:19.898 Controller Attributes 00:10:19.898 128-bit Host Identifier: Not Supported 00:10:19.898 Non-Operational Permissive Mode: Not Supported 00:10:19.898 NVM Sets: Not Supported 00:10:19.898 Read Recovery Levels: Not Supported 00:10:19.898 Endurance Groups: Not Supported 00:10:19.898 Predictable Latency Mode: Not Supported 00:10:19.898 Traffic Based Keep ALive: Not Supported 00:10:19.898 Namespace Granularity: Not Supported 00:10:19.898 SQ Associations: Not Supported 00:10:19.898 UUID List: Not Supported 00:10:19.898 Multi-Domain Subsystem: Not Supported 00:10:19.898 Fixed Capacity Management: Not Supported 00:10:19.898 Variable Capacity Management: Not Supported 00:10:19.898 Delete Endurance Group: Not Supported 00:10:19.898 Delete NVM Set: Not Supported 00:10:19.899 Extended LBA Formats Supported: Supported 00:10:19.899 Flexible Data Placement Supported: Not Supported 00:10:19.899 00:10:19.899 Controller Memory Buffer Support 00:10:19.899 ================================ 00:10:19.899 Supported: No 00:10:19.899 00:10:19.899 Persistent Memory Region Support 00:10:19.899 ================================ 00:10:19.899 Supported: No 00:10:19.899 00:10:19.899 Admin Command Set Attributes 00:10:19.899 ============================ 00:10:19.899 Security Send/Receive: Not Supported 00:10:19.899 Format NVM: Supported 00:10:19.899 Firmware Activate/Download: Not Supported 00:10:19.899 Namespace Management: Supported 00:10:19.899 Device Self-Test: Not Supported 00:10:19.899 Directives: Supported 00:10:19.899 NVMe-MI: Not Supported 00:10:19.899 Virtualization Management: Not Supported 00:10:19.899 Doorbell Buffer Config: Supported 00:10:19.899 Get LBA Status Capability: Not Supported 00:10:19.899 Command & Feature Lockdown Capability: Not Supported 00:10:19.899 Abort Command Limit: 4 00:10:19.899 Async Event Request 
Limit: 4 00:10:19.899 Number of Firmware Slots: N/A 00:10:19.899 Firmware Slot 1 Read-Only: N/A 00:10:19.899 Firmware Activation Without Reset: N/A 00:10:19.899 Multiple Update Detection Support: N/A 00:10:19.899 Firmware Update Granularity: No Information Provided 00:10:19.899 Per-Namespace SMART Log: Yes 00:10:19.899 Asymmetric Namespace Access Log Page: Not Supported 00:10:19.899 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:19.899 Command Effects Log Page: Supported 00:10:19.899 Get Log Page Extended Data: Supported 00:10:19.899 Telemetry Log Pages: Not Supported 00:10:19.899 Persistent Event Log Pages: Not Supported 00:10:19.899 Supported Log Pages Log Page: May Support 00:10:19.899 Commands Supported & Effects Log Page: Not Supported 00:10:19.899 Feature Identifiers & Effects Log Page:May Support 00:10:19.899 NVMe-MI Commands & Effects Log Page: May Support 00:10:19.899 Data Area 4 for Telemetry Log: Not Supported 00:10:19.899 Error Log Page Entries Supported: 1 00:10:19.899 Keep Alive: Not Supported 00:10:19.899 00:10:19.899 NVM Command Set Attributes 00:10:19.899 ========================== 00:10:19.899 Submission Queue Entry Size 00:10:19.899 Max: 64 00:10:19.899 Min: 64 00:10:19.899 Completion Queue Entry Size 00:10:19.899 Max: 16 00:10:19.899 Min: 16 00:10:19.899 Number of Namespaces: 256 00:10:19.899 Compare Command: Supported 00:10:19.899 Write Uncorrectable Command: Not Supported 00:10:19.899 Dataset Management Command: Supported 00:10:19.899 Write Zeroes Command: Supported 00:10:19.899 Set Features Save Field: Supported 00:10:19.899 Reservations: Not Supported 00:10:19.899 Timestamp: Supported 00:10:19.899 Copy: Supported 00:10:19.899 Volatile Write Cache: Present 00:10:19.899 Atomic Write Unit (Normal): 1 00:10:19.899 Atomic Write Unit (PFail): 1 00:10:19.899 Atomic Compare & Write Unit: 1 00:10:19.899 Fused Compare & Write: Not Supported 00:10:19.899 Scatter-Gather List 00:10:19.899 SGL Command Set: Supported 00:10:19.899 SGL Keyed: Not Supported 00:10:19.899 SGL Bit Bucket Descriptor: Not Supported 00:10:19.899 SGL Metadata Pointer: Not Supported 00:10:19.899 Oversized SGL: Not Supported 00:10:19.899 SGL Metadata Address: Not Supported 00:10:19.899 SGL Offset: Not Supported 00:10:19.899 Transport SGL Data Block: Not Supported 00:10:19.899 Replay Protected Memory Block: Not Supported 00:10:19.899 00:10:19.899 Firmware Slot Information 00:10:19.899 ========================= 00:10:19.899 Active slot: 1 00:10:19.899 Slot 1 Firmware Revision: 1.0 00:10:19.899 00:10:19.899 00:10:19.899 Commands Supported and Effects 00:10:19.899 ============================== 00:10:19.899 Admin Commands 00:10:19.899 -------------- 00:10:19.899 Delete I/O Submission Queue (00h): Supported 00:10:19.899 Create I/O Submission Queue (01h): Supported 00:10:19.899 Get Log Page (02h): Supported 00:10:19.899 Delete I/O Completion Queue (04h): Supported 00:10:19.899 Create I/O Completion Queue (05h): Supported 00:10:19.899 Identify (06h): Supported 00:10:19.899 Abort (08h): Supported 00:10:19.899 Set Features (09h): Supported 00:10:19.899 Get Features (0Ah): Supported 00:10:19.899 Asynchronous Event Request (0Ch): Supported 00:10:19.899 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:19.899 Directive Send (19h): Supported 00:10:19.899 Directive Receive (1Ah): Supported 00:10:19.899 Virtualization Management (1Ch): Supported 00:10:19.899 Doorbell Buffer Config (7Ch): Supported 00:10:19.899 Format NVM (80h): Supported LBA-Change 00:10:19.899 I/O Commands 00:10:19.899 ------------ 
00:10:19.899 Flush (00h): Supported LBA-Change 00:10:19.899 Write (01h): Supported LBA-Change 00:10:19.899 Read (02h): Supported 00:10:19.899 Compare (05h): Supported 00:10:19.899 Write Zeroes (08h): Supported LBA-Change 00:10:19.899 Dataset Management (09h): Supported LBA-Change 00:10:19.899 Unknown (0Ch): Supported 00:10:19.899 Unknown (12h): Supported 00:10:19.899 Copy (19h): Supported LBA-Change 00:10:19.899 Unknown (1Dh): Supported LBA-Change 00:10:19.899 00:10:19.899 Error Log 00:10:19.899 ========= 00:10:19.899 00:10:19.899 Arbitration 00:10:19.899 =========== 00:10:19.899 Arbitration Burst: no limit 00:10:19.899 00:10:19.899 Power Management 00:10:19.899 ================ 00:10:19.899 Number of Power States: 1 00:10:19.899 Current Power State: Power State #0 00:10:19.899 Power State #0: 00:10:19.899 Max Power: 25.00 W 00:10:19.899 Non-Operational State: Operational 00:10:19.899 Entry Latency: 16 microseconds 00:10:19.899 Exit Latency: 4 microseconds 00:10:19.899 Relative Read Throughput: 0 00:10:19.899 Relative Read Latency: 0 00:10:19.899 Relative Write Throughput: 0 00:10:19.899 Relative Write Latency: 0 00:10:19.899 Idle Power: Not Reported 00:10:19.899 Active Power: Not Reported 00:10:19.899 Non-Operational Permissive Mode: Not Supported 00:10:19.899 00:10:19.899 Health Information 00:10:19.899 ================== 00:10:19.899 Critical Warnings: 00:10:19.899 Available Spare Space: OK 00:10:19.899 Temperature: OK 00:10:19.899 Device Reliability: OK 00:10:19.899 Read Only: No 00:10:19.899 Volatile Memory Backup: OK 00:10:19.899 Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.899 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:19.899 Available Spare: 0% 00:10:19.899 Available Spare Threshold: 0% 00:10:19.899 Life Percentage Used: 0% 00:10:19.899 Data Units Read: 1185 00:10:19.899 Data Units Written: 1023 00:10:19.899 Host Read Commands: 50875 00:10:19.899 Host Write Commands: 49464 00:10:19.899 Controller Busy Time: 0 minutes 00:10:19.899 Power Cycles: 0 00:10:19.899 Power On Hours: 0 hours 00:10:19.899 Unsafe Shutdowns: 0 00:10:19.899 Unrecoverable Media Errors: 0 00:10:19.899 Lifetime Error Log Entries: 0 00:10:19.899 Warning Temperature Time: 0 minutes 00:10:19.899 Critical Temperature Time: 0 minutes 00:10:19.899 00:10:19.899 Number of Queues 00:10:19.899 ================ 00:10:19.899 Number of I/O Submission Queues: 64 00:10:19.899 Number of I/O Completion Queues: 64 00:10:19.899 00:10:19.899 ZNS Specific Controller Data 00:10:19.899 ============================ 00:10:19.899 Zone Append Size Limit: 0 00:10:19.899 00:10:19.899 00:10:19.899 Active Namespaces 00:10:19.899 ================= 00:10:19.899 Namespace ID:1 00:10:19.899 Error Recovery Timeout: Unlimited 00:10:19.899 Command Set Identifier: NVM (00h) 00:10:19.899 Deallocate: Supported 00:10:19.899 Deallocated/Unwritten Error: Supported 00:10:19.899 Deallocated Read Value: All 0x00 00:10:19.899 Deallocate in Write Zeroes: Not Supported 00:10:19.899 Deallocated Guard Field: 0xFFFF 00:10:19.899 Flush: Supported 00:10:19.899 Reservation: Not Supported 00:10:19.899 Metadata Transferred as: Separate Metadata Buffer 00:10:19.899 Namespace Sharing Capabilities: Private 00:10:19.899 Size (in LBAs): 1548666 (5GiB) 00:10:19.899 Capacity (in LBAs): 1548666 (5GiB) 00:10:19.899 Utilization (in LBAs): 1548666 (5GiB) 00:10:19.899 Thin Provisioning: Not Supported 00:10:19.899 Per-NS Atomic Units: No 00:10:19.899 Maximum Single Source Range Length: 128 00:10:19.899 Maximum Copy Length: 128 00:10:19.899 Maximum Source Range 
Count: 128 00:10:19.899 NGUID/EUI64 Never Reused: No 00:10:19.899 Namespace Write Protected: No 00:10:19.899 Number of LBA Formats: 8 00:10:19.899 Current LBA Format: LBA Format #07 00:10:19.899 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:19.899 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:19.899 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:19.899 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:19.899 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:19.899 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:19.899 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:19.899 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:19.899 00:10:19.899 01:20:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:19.899 01:20:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:20.158 ===================================================== 00:10:20.158 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:20.158 ===================================================== 00:10:20.158 Controller Capabilities/Features 00:10:20.158 ================================ 00:10:20.158 Vendor ID: 1b36 00:10:20.158 Subsystem Vendor ID: 1af4 00:10:20.158 Serial Number: 12341 00:10:20.158 Model Number: QEMU NVMe Ctrl 00:10:20.158 Firmware Version: 8.0.0 00:10:20.158 Recommended Arb Burst: 6 00:10:20.158 IEEE OUI Identifier: 00 54 52 00:10:20.158 Multi-path I/O 00:10:20.158 May have multiple subsystem ports: No 00:10:20.158 May have multiple controllers: No 00:10:20.158 Associated with SR-IOV VF: No 00:10:20.158 Max Data Transfer Size: 524288 00:10:20.158 Max Number of Namespaces: 256 00:10:20.158 Max Number of I/O Queues: 64 00:10:20.158 NVMe Specification Version (VS): 1.4 00:10:20.158 NVMe Specification Version (Identify): 1.4 00:10:20.158 Maximum Queue Entries: 2048 00:10:20.158 Contiguous Queues Required: Yes 00:10:20.158 Arbitration Mechanisms Supported 00:10:20.158 Weighted Round Robin: Not Supported 00:10:20.158 Vendor Specific: Not Supported 00:10:20.158 Reset Timeout: 7500 ms 00:10:20.158 Doorbell Stride: 4 bytes 00:10:20.158 NVM Subsystem Reset: Not Supported 00:10:20.158 Command Sets Supported 00:10:20.158 NVM Command Set: Supported 00:10:20.158 Boot Partition: Not Supported 00:10:20.158 Memory Page Size Minimum: 4096 bytes 00:10:20.158 Memory Page Size Maximum: 65536 bytes 00:10:20.158 Persistent Memory Region: Not Supported 00:10:20.158 Optional Asynchronous Events Supported 00:10:20.158 Namespace Attribute Notices: Supported 00:10:20.158 Firmware Activation Notices: Not Supported 00:10:20.158 ANA Change Notices: Not Supported 00:10:20.158 PLE Aggregate Log Change Notices: Not Supported 00:10:20.158 LBA Status Info Alert Notices: Not Supported 00:10:20.158 EGE Aggregate Log Change Notices: Not Supported 00:10:20.158 Normal NVM Subsystem Shutdown event: Not Supported 00:10:20.158 Zone Descriptor Change Notices: Not Supported 00:10:20.158 Discovery Log Change Notices: Not Supported 00:10:20.158 Controller Attributes 00:10:20.158 128-bit Host Identifier: Not Supported 00:10:20.158 Non-Operational Permissive Mode: Not Supported 00:10:20.158 NVM Sets: Not Supported 00:10:20.158 Read Recovery Levels: Not Supported 00:10:20.158 Endurance Groups: Not Supported 00:10:20.158 Predictable Latency Mode: Not Supported 00:10:20.158 Traffic Based Keep ALive: Not Supported 00:10:20.158 Namespace Granularity: Not Supported 00:10:20.158 SQ Associations: Not 
Supported 00:10:20.158 UUID List: Not Supported 00:10:20.158 Multi-Domain Subsystem: Not Supported 00:10:20.158 Fixed Capacity Management: Not Supported 00:10:20.158 Variable Capacity Management: Not Supported 00:10:20.158 Delete Endurance Group: Not Supported 00:10:20.158 Delete NVM Set: Not Supported 00:10:20.158 Extended LBA Formats Supported: Supported 00:10:20.158 Flexible Data Placement Supported: Not Supported 00:10:20.158 00:10:20.158 Controller Memory Buffer Support 00:10:20.158 ================================ 00:10:20.158 Supported: No 00:10:20.158 00:10:20.158 Persistent Memory Region Support 00:10:20.158 ================================ 00:10:20.158 Supported: No 00:10:20.158 00:10:20.158 Admin Command Set Attributes 00:10:20.158 ============================ 00:10:20.158 Security Send/Receive: Not Supported 00:10:20.158 Format NVM: Supported 00:10:20.158 Firmware Activate/Download: Not Supported 00:10:20.158 Namespace Management: Supported 00:10:20.158 Device Self-Test: Not Supported 00:10:20.158 Directives: Supported 00:10:20.158 NVMe-MI: Not Supported 00:10:20.158 Virtualization Management: Not Supported 00:10:20.158 Doorbell Buffer Config: Supported 00:10:20.158 Get LBA Status Capability: Not Supported 00:10:20.158 Command & Feature Lockdown Capability: Not Supported 00:10:20.158 Abort Command Limit: 4 00:10:20.158 Async Event Request Limit: 4 00:10:20.158 Number of Firmware Slots: N/A 00:10:20.159 Firmware Slot 1 Read-Only: N/A 00:10:20.159 Firmware Activation Without Reset: N/A 00:10:20.159 Multiple Update Detection Support: N/A 00:10:20.159 Firmware Update Granularity: No Information Provided 00:10:20.159 Per-Namespace SMART Log: Yes 00:10:20.159 Asymmetric Namespace Access Log Page: Not Supported 00:10:20.159 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:20.159 Command Effects Log Page: Supported 00:10:20.159 Get Log Page Extended Data: Supported 00:10:20.159 Telemetry Log Pages: Not Supported 00:10:20.159 Persistent Event Log Pages: Not Supported 00:10:20.159 Supported Log Pages Log Page: May Support 00:10:20.159 Commands Supported & Effects Log Page: Not Supported 00:10:20.159 Feature Identifiers & Effects Log Page:May Support 00:10:20.159 NVMe-MI Commands & Effects Log Page: May Support 00:10:20.159 Data Area 4 for Telemetry Log: Not Supported 00:10:20.159 Error Log Page Entries Supported: 1 00:10:20.159 Keep Alive: Not Supported 00:10:20.159 00:10:20.159 NVM Command Set Attributes 00:10:20.159 ========================== 00:10:20.159 Submission Queue Entry Size 00:10:20.159 Max: 64 00:10:20.159 Min: 64 00:10:20.159 Completion Queue Entry Size 00:10:20.159 Max: 16 00:10:20.159 Min: 16 00:10:20.159 Number of Namespaces: 256 00:10:20.159 Compare Command: Supported 00:10:20.159 Write Uncorrectable Command: Not Supported 00:10:20.159 Dataset Management Command: Supported 00:10:20.159 Write Zeroes Command: Supported 00:10:20.159 Set Features Save Field: Supported 00:10:20.159 Reservations: Not Supported 00:10:20.159 Timestamp: Supported 00:10:20.159 Copy: Supported 00:10:20.159 Volatile Write Cache: Present 00:10:20.159 Atomic Write Unit (Normal): 1 00:10:20.159 Atomic Write Unit (PFail): 1 00:10:20.159 Atomic Compare & Write Unit: 1 00:10:20.159 Fused Compare & Write: Not Supported 00:10:20.159 Scatter-Gather List 00:10:20.159 SGL Command Set: Supported 00:10:20.159 SGL Keyed: Not Supported 00:10:20.159 SGL Bit Bucket Descriptor: Not Supported 00:10:20.159 SGL Metadata Pointer: Not Supported 00:10:20.159 Oversized SGL: Not Supported 00:10:20.159 SGL Metadata Address: 
Not Supported 00:10:20.159 SGL Offset: Not Supported 00:10:20.159 Transport SGL Data Block: Not Supported 00:10:20.159 Replay Protected Memory Block: Not Supported 00:10:20.159 00:10:20.159 Firmware Slot Information 00:10:20.159 ========================= 00:10:20.159 Active slot: 1 00:10:20.159 Slot 1 Firmware Revision: 1.0 00:10:20.159 00:10:20.159 00:10:20.159 Commands Supported and Effects 00:10:20.159 ============================== 00:10:20.159 Admin Commands 00:10:20.159 -------------- 00:10:20.159 Delete I/O Submission Queue (00h): Supported 00:10:20.159 Create I/O Submission Queue (01h): Supported 00:10:20.159 Get Log Page (02h): Supported 00:10:20.159 Delete I/O Completion Queue (04h): Supported 00:10:20.159 Create I/O Completion Queue (05h): Supported 00:10:20.159 Identify (06h): Supported 00:10:20.159 Abort (08h): Supported 00:10:20.159 Set Features (09h): Supported 00:10:20.159 Get Features (0Ah): Supported 00:10:20.159 Asynchronous Event Request (0Ch): Supported 00:10:20.159 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:20.159 Directive Send (19h): Supported 00:10:20.159 Directive Receive (1Ah): Supported 00:10:20.159 Virtualization Management (1Ch): Supported 00:10:20.159 Doorbell Buffer Config (7Ch): Supported 00:10:20.159 Format NVM (80h): Supported LBA-Change 00:10:20.159 I/O Commands 00:10:20.159 ------------ 00:10:20.159 Flush (00h): Supported LBA-Change 00:10:20.159 Write (01h): Supported LBA-Change 00:10:20.159 Read (02h): Supported 00:10:20.159 Compare (05h): Supported 00:10:20.159 Write Zeroes (08h): Supported LBA-Change 00:10:20.159 Dataset Management (09h): Supported LBA-Change 00:10:20.159 Unknown (0Ch): Supported 00:10:20.159 Unknown (12h): Supported 00:10:20.159 Copy (19h): Supported LBA-Change 00:10:20.159 Unknown (1Dh): Supported LBA-Change 00:10:20.159 00:10:20.159 Error Log 00:10:20.159 ========= 00:10:20.159 00:10:20.159 Arbitration 00:10:20.159 =========== 00:10:20.159 Arbitration Burst: no limit 00:10:20.159 00:10:20.159 Power Management 00:10:20.159 ================ 00:10:20.159 Number of Power States: 1 00:10:20.159 Current Power State: Power State #0 00:10:20.159 Power State #0: 00:10:20.159 Max Power: 25.00 W 00:10:20.159 Non-Operational State: Operational 00:10:20.159 Entry Latency: 16 microseconds 00:10:20.159 Exit Latency: 4 microseconds 00:10:20.159 Relative Read Throughput: 0 00:10:20.159 Relative Read Latency: 0 00:10:20.159 Relative Write Throughput: 0 00:10:20.159 Relative Write Latency: 0 00:10:20.159 Idle Power: Not Reported 00:10:20.159 Active Power: Not Reported 00:10:20.159 Non-Operational Permissive Mode: Not Supported 00:10:20.159 00:10:20.159 Health Information 00:10:20.159 ================== 00:10:20.159 Critical Warnings: 00:10:20.159 Available Spare Space: OK 00:10:20.159 Temperature: OK 00:10:20.159 Device Reliability: OK 00:10:20.159 Read Only: No 00:10:20.159 Volatile Memory Backup: OK 00:10:20.159 Current Temperature: 323 Kelvin (50 Celsius) 00:10:20.159 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:20.159 Available Spare: 0% 00:10:20.159 Available Spare Threshold: 0% 00:10:20.159 Life Percentage Used: 0% 00:10:20.159 Data Units Read: 894 00:10:20.159 Data Units Written: 743 00:10:20.159 Host Read Commands: 37521 00:10:20.159 Host Write Commands: 35253 00:10:20.159 Controller Busy Time: 0 minutes 00:10:20.159 Power Cycles: 0 00:10:20.159 Power On Hours: 0 hours 00:10:20.159 Unsafe Shutdowns: 0 00:10:20.159 Unrecoverable Media Errors: 0 00:10:20.159 Lifetime Error Log Entries: 0 00:10:20.159 Warning 
Temperature Time: 0 minutes 00:10:20.159 Critical Temperature Time: 0 minutes 00:10:20.159 00:10:20.159 Number of Queues 00:10:20.159 ================ 00:10:20.159 Number of I/O Submission Queues: 64 00:10:20.159 Number of I/O Completion Queues: 64 00:10:20.159 00:10:20.159 ZNS Specific Controller Data 00:10:20.159 ============================ 00:10:20.159 Zone Append Size Limit: 0 00:10:20.159 00:10:20.159 00:10:20.159 Active Namespaces 00:10:20.159 ================= 00:10:20.159 Namespace ID:1 00:10:20.159 Error Recovery Timeout: Unlimited 00:10:20.159 Command Set Identifier: NVM (00h) 00:10:20.159 Deallocate: Supported 00:10:20.159 Deallocated/Unwritten Error: Supported 00:10:20.159 Deallocated Read Value: All 0x00 00:10:20.159 Deallocate in Write Zeroes: Not Supported 00:10:20.159 Deallocated Guard Field: 0xFFFF 00:10:20.159 Flush: Supported 00:10:20.159 Reservation: Not Supported 00:10:20.159 Namespace Sharing Capabilities: Private 00:10:20.159 Size (in LBAs): 1310720 (5GiB) 00:10:20.159 Capacity (in LBAs): 1310720 (5GiB) 00:10:20.159 Utilization (in LBAs): 1310720 (5GiB) 00:10:20.159 Thin Provisioning: Not Supported 00:10:20.159 Per-NS Atomic Units: No 00:10:20.159 Maximum Single Source Range Length: 128 00:10:20.159 Maximum Copy Length: 128 00:10:20.159 Maximum Source Range Count: 128 00:10:20.159 NGUID/EUI64 Never Reused: No 00:10:20.159 Namespace Write Protected: No 00:10:20.159 Number of LBA Formats: 8 00:10:20.159 Current LBA Format: LBA Format #04 00:10:20.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.159 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.159 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.159 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.159 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.159 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.159 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.159 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.159 00:10:20.159 01:20:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:20.159 01:20:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:20.419 ===================================================== 00:10:20.419 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:20.419 ===================================================== 00:10:20.419 Controller Capabilities/Features 00:10:20.419 ================================ 00:10:20.419 Vendor ID: 1b36 00:10:20.419 Subsystem Vendor ID: 1af4 00:10:20.419 Serial Number: 12342 00:10:20.419 Model Number: QEMU NVMe Ctrl 00:10:20.419 Firmware Version: 8.0.0 00:10:20.419 Recommended Arb Burst: 6 00:10:20.419 IEEE OUI Identifier: 00 54 52 00:10:20.419 Multi-path I/O 00:10:20.419 May have multiple subsystem ports: No 00:10:20.419 May have multiple controllers: No 00:10:20.419 Associated with SR-IOV VF: No 00:10:20.419 Max Data Transfer Size: 524288 00:10:20.419 Max Number of Namespaces: 256 00:10:20.419 Max Number of I/O Queues: 64 00:10:20.419 NVMe Specification Version (VS): 1.4 00:10:20.419 NVMe Specification Version (Identify): 1.4 00:10:20.419 Maximum Queue Entries: 2048 00:10:20.419 Contiguous Queues Required: Yes 00:10:20.419 Arbitration Mechanisms Supported 00:10:20.419 Weighted Round Robin: Not Supported 00:10:20.419 Vendor Specific: Not Supported 00:10:20.419 Reset Timeout: 7500 ms 00:10:20.419 Doorbell Stride: 4 bytes 00:10:20.419 NVM Subsystem Reset: Not Supported 
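
The identify reports in this test come from nvme.sh looping over every attached controller and running spdk_nvme_identify against its PCIe address, as the for-bdf lines (nvme/nvme.sh@15 and @16) recorded above show. A minimal stand-alone sketch of that loop follows; the hard-coded bdfs list is an assumption taken from the addresses seen in this run, whereas the real script discovers the controllers at runtime.

#!/usr/bin/env bash
# Sketch only: identify each controller observed in this run.
# The bdfs array is hard-coded here for illustration; nvme.sh builds it dynamically.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
done
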
00:10:20.419 Command Sets Supported 00:10:20.419 NVM Command Set: Supported 00:10:20.419 Boot Partition: Not Supported 00:10:20.419 Memory Page Size Minimum: 4096 bytes 00:10:20.419 Memory Page Size Maximum: 65536 bytes 00:10:20.419 Persistent Memory Region: Not Supported 00:10:20.419 Optional Asynchronous Events Supported 00:10:20.419 Namespace Attribute Notices: Supported 00:10:20.419 Firmware Activation Notices: Not Supported 00:10:20.419 ANA Change Notices: Not Supported 00:10:20.419 PLE Aggregate Log Change Notices: Not Supported 00:10:20.419 LBA Status Info Alert Notices: Not Supported 00:10:20.419 EGE Aggregate Log Change Notices: Not Supported 00:10:20.419 Normal NVM Subsystem Shutdown event: Not Supported 00:10:20.419 Zone Descriptor Change Notices: Not Supported 00:10:20.419 Discovery Log Change Notices: Not Supported 00:10:20.419 Controller Attributes 00:10:20.419 128-bit Host Identifier: Not Supported 00:10:20.419 Non-Operational Permissive Mode: Not Supported 00:10:20.419 NVM Sets: Not Supported 00:10:20.419 Read Recovery Levels: Not Supported 00:10:20.419 Endurance Groups: Not Supported 00:10:20.419 Predictable Latency Mode: Not Supported 00:10:20.419 Traffic Based Keep ALive: Not Supported 00:10:20.419 Namespace Granularity: Not Supported 00:10:20.419 SQ Associations: Not Supported 00:10:20.419 UUID List: Not Supported 00:10:20.419 Multi-Domain Subsystem: Not Supported 00:10:20.419 Fixed Capacity Management: Not Supported 00:10:20.419 Variable Capacity Management: Not Supported 00:10:20.419 Delete Endurance Group: Not Supported 00:10:20.419 Delete NVM Set: Not Supported 00:10:20.419 Extended LBA Formats Supported: Supported 00:10:20.419 Flexible Data Placement Supported: Not Supported 00:10:20.419 00:10:20.419 Controller Memory Buffer Support 00:10:20.419 ================================ 00:10:20.419 Supported: No 00:10:20.419 00:10:20.419 Persistent Memory Region Support 00:10:20.419 ================================ 00:10:20.419 Supported: No 00:10:20.419 00:10:20.419 Admin Command Set Attributes 00:10:20.419 ============================ 00:10:20.419 Security Send/Receive: Not Supported 00:10:20.419 Format NVM: Supported 00:10:20.419 Firmware Activate/Download: Not Supported 00:10:20.419 Namespace Management: Supported 00:10:20.419 Device Self-Test: Not Supported 00:10:20.419 Directives: Supported 00:10:20.419 NVMe-MI: Not Supported 00:10:20.419 Virtualization Management: Not Supported 00:10:20.419 Doorbell Buffer Config: Supported 00:10:20.419 Get LBA Status Capability: Not Supported 00:10:20.419 Command & Feature Lockdown Capability: Not Supported 00:10:20.419 Abort Command Limit: 4 00:10:20.419 Async Event Request Limit: 4 00:10:20.419 Number of Firmware Slots: N/A 00:10:20.419 Firmware Slot 1 Read-Only: N/A 00:10:20.419 Firmware Activation Without Reset: N/A 00:10:20.419 Multiple Update Detection Support: N/A 00:10:20.419 Firmware Update Granularity: No Information Provided 00:10:20.419 Per-Namespace SMART Log: Yes 00:10:20.419 Asymmetric Namespace Access Log Page: Not Supported 00:10:20.419 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:20.419 Command Effects Log Page: Supported 00:10:20.419 Get Log Page Extended Data: Supported 00:10:20.419 Telemetry Log Pages: Not Supported 00:10:20.419 Persistent Event Log Pages: Not Supported 00:10:20.419 Supported Log Pages Log Page: May Support 00:10:20.419 Commands Supported & Effects Log Page: Not Supported 00:10:20.419 Feature Identifiers & Effects Log Page:May Support 00:10:20.419 NVMe-MI Commands & Effects Log Page: May 
Support 00:10:20.419 Data Area 4 for Telemetry Log: Not Supported 00:10:20.419 Error Log Page Entries Supported: 1 00:10:20.419 Keep Alive: Not Supported 00:10:20.419 00:10:20.419 NVM Command Set Attributes 00:10:20.419 ========================== 00:10:20.419 Submission Queue Entry Size 00:10:20.419 Max: 64 00:10:20.419 Min: 64 00:10:20.419 Completion Queue Entry Size 00:10:20.419 Max: 16 00:10:20.419 Min: 16 00:10:20.419 Number of Namespaces: 256 00:10:20.419 Compare Command: Supported 00:10:20.419 Write Uncorrectable Command: Not Supported 00:10:20.419 Dataset Management Command: Supported 00:10:20.419 Write Zeroes Command: Supported 00:10:20.419 Set Features Save Field: Supported 00:10:20.419 Reservations: Not Supported 00:10:20.419 Timestamp: Supported 00:10:20.419 Copy: Supported 00:10:20.419 Volatile Write Cache: Present 00:10:20.419 Atomic Write Unit (Normal): 1 00:10:20.419 Atomic Write Unit (PFail): 1 00:10:20.419 Atomic Compare & Write Unit: 1 00:10:20.419 Fused Compare & Write: Not Supported 00:10:20.419 Scatter-Gather List 00:10:20.419 SGL Command Set: Supported 00:10:20.419 SGL Keyed: Not Supported 00:10:20.419 SGL Bit Bucket Descriptor: Not Supported 00:10:20.419 SGL Metadata Pointer: Not Supported 00:10:20.419 Oversized SGL: Not Supported 00:10:20.419 SGL Metadata Address: Not Supported 00:10:20.419 SGL Offset: Not Supported 00:10:20.419 Transport SGL Data Block: Not Supported 00:10:20.419 Replay Protected Memory Block: Not Supported 00:10:20.419 00:10:20.419 Firmware Slot Information 00:10:20.419 ========================= 00:10:20.419 Active slot: 1 00:10:20.419 Slot 1 Firmware Revision: 1.0 00:10:20.419 00:10:20.419 00:10:20.419 Commands Supported and Effects 00:10:20.419 ============================== 00:10:20.419 Admin Commands 00:10:20.419 -------------- 00:10:20.419 Delete I/O Submission Queue (00h): Supported 00:10:20.419 Create I/O Submission Queue (01h): Supported 00:10:20.419 Get Log Page (02h): Supported 00:10:20.419 Delete I/O Completion Queue (04h): Supported 00:10:20.419 Create I/O Completion Queue (05h): Supported 00:10:20.419 Identify (06h): Supported 00:10:20.419 Abort (08h): Supported 00:10:20.419 Set Features (09h): Supported 00:10:20.419 Get Features (0Ah): Supported 00:10:20.419 Asynchronous Event Request (0Ch): Supported 00:10:20.419 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:20.419 Directive Send (19h): Supported 00:10:20.419 Directive Receive (1Ah): Supported 00:10:20.419 Virtualization Management (1Ch): Supported 00:10:20.419 Doorbell Buffer Config (7Ch): Supported 00:10:20.419 Format NVM (80h): Supported LBA-Change 00:10:20.419 I/O Commands 00:10:20.419 ------------ 00:10:20.419 Flush (00h): Supported LBA-Change 00:10:20.419 Write (01h): Supported LBA-Change 00:10:20.419 Read (02h): Supported 00:10:20.419 Compare (05h): Supported 00:10:20.419 Write Zeroes (08h): Supported LBA-Change 00:10:20.419 Dataset Management (09h): Supported LBA-Change 00:10:20.419 Unknown (0Ch): Supported 00:10:20.419 Unknown (12h): Supported 00:10:20.419 Copy (19h): Supported LBA-Change 00:10:20.419 Unknown (1Dh): Supported LBA-Change 00:10:20.419 00:10:20.419 Error Log 00:10:20.419 ========= 00:10:20.419 00:10:20.419 Arbitration 00:10:20.419 =========== 00:10:20.419 Arbitration Burst: no limit 00:10:20.419 00:10:20.419 Power Management 00:10:20.419 ================ 00:10:20.419 Number of Power States: 1 00:10:20.419 Current Power State: Power State #0 00:10:20.419 Power State #0: 00:10:20.420 Max Power: 25.00 W 00:10:20.420 Non-Operational State: 
Operational 00:10:20.420 Entry Latency: 16 microseconds 00:10:20.420 Exit Latency: 4 microseconds 00:10:20.420 Relative Read Throughput: 0 00:10:20.420 Relative Read Latency: 0 00:10:20.420 Relative Write Throughput: 0 00:10:20.420 Relative Write Latency: 0 00:10:20.420 Idle Power: Not Reported 00:10:20.420 Active Power: Not Reported 00:10:20.420 Non-Operational Permissive Mode: Not Supported 00:10:20.420 00:10:20.420 Health Information 00:10:20.420 ================== 00:10:20.420 Critical Warnings: 00:10:20.420 Available Spare Space: OK 00:10:20.420 Temperature: OK 00:10:20.420 Device Reliability: OK 00:10:20.420 Read Only: No 00:10:20.420 Volatile Memory Backup: OK 00:10:20.420 Current Temperature: 323 Kelvin (50 Celsius) 00:10:20.420 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:20.420 Available Spare: 0% 00:10:20.420 Available Spare Threshold: 0% 00:10:20.420 Life Percentage Used: 0% 00:10:20.420 Data Units Read: 2689 00:10:20.420 Data Units Written: 2369 00:10:20.420 Host Read Commands: 111170 00:10:20.420 Host Write Commands: 106940 00:10:20.420 Controller Busy Time: 0 minutes 00:10:20.420 Power Cycles: 0 00:10:20.420 Power On Hours: 0 hours 00:10:20.420 Unsafe Shutdowns: 0 00:10:20.420 Unrecoverable Media Errors: 0 00:10:20.420 Lifetime Error Log Entries: 0 00:10:20.420 Warning Temperature Time: 0 minutes 00:10:20.420 Critical Temperature Time: 0 minutes 00:10:20.420 00:10:20.420 Number of Queues 00:10:20.420 ================ 00:10:20.420 Number of I/O Submission Queues: 64 00:10:20.420 Number of I/O Completion Queues: 64 00:10:20.420 00:10:20.420 ZNS Specific Controller Data 00:10:20.420 ============================ 00:10:20.420 Zone Append Size Limit: 0 00:10:20.420 00:10:20.420 00:10:20.420 Active Namespaces 00:10:20.420 ================= 00:10:20.420 Namespace ID:1 00:10:20.420 Error Recovery Timeout: Unlimited 00:10:20.420 Command Set Identifier: NVM (00h) 00:10:20.420 Deallocate: Supported 00:10:20.420 Deallocated/Unwritten Error: Supported 00:10:20.420 Deallocated Read Value: All 0x00 00:10:20.420 Deallocate in Write Zeroes: Not Supported 00:10:20.420 Deallocated Guard Field: 0xFFFF 00:10:20.420 Flush: Supported 00:10:20.420 Reservation: Not Supported 00:10:20.420 Namespace Sharing Capabilities: Private 00:10:20.420 Size (in LBAs): 1048576 (4GiB) 00:10:20.420 Capacity (in LBAs): 1048576 (4GiB) 00:10:20.420 Utilization (in LBAs): 1048576 (4GiB) 00:10:20.420 Thin Provisioning: Not Supported 00:10:20.420 Per-NS Atomic Units: No 00:10:20.420 Maximum Single Source Range Length: 128 00:10:20.420 Maximum Copy Length: 128 00:10:20.420 Maximum Source Range Count: 128 00:10:20.420 NGUID/EUI64 Never Reused: No 00:10:20.420 Namespace Write Protected: No 00:10:20.420 Number of LBA Formats: 8 00:10:20.420 Current LBA Format: LBA Format #04 00:10:20.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.420 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.420 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.420 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.420 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.420 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.420 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.420 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.420 00:10:20.420 Namespace ID:2 00:10:20.420 Error Recovery Timeout: Unlimited 00:10:20.420 Command Set Identifier: NVM (00h) 00:10:20.420 Deallocate: Supported 00:10:20.420 Deallocated/Unwritten Error: Supported 00:10:20.420 Deallocated Read Value: 
All 0x00 00:10:20.420 Deallocate in Write Zeroes: Not Supported 00:10:20.420 Deallocated Guard Field: 0xFFFF 00:10:20.420 Flush: Supported 00:10:20.420 Reservation: Not Supported 00:10:20.420 Namespace Sharing Capabilities: Private 00:10:20.420 Size (in LBAs): 1048576 (4GiB) 00:10:20.420 Capacity (in LBAs): 1048576 (4GiB) 00:10:20.420 Utilization (in LBAs): 1048576 (4GiB) 00:10:20.420 Thin Provisioning: Not Supported 00:10:20.420 Per-NS Atomic Units: No 00:10:20.420 Maximum Single Source Range Length: 128 00:10:20.420 Maximum Copy Length: 128 00:10:20.420 Maximum Source Range Count: 128 00:10:20.420 NGUID/EUI64 Never Reused: No 00:10:20.420 Namespace Write Protected: No 00:10:20.420 Number of LBA Formats: 8 00:10:20.420 Current LBA Format: LBA Format #04 00:10:20.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.420 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.420 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.420 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.420 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.420 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.420 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.420 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.420 00:10:20.420 Namespace ID:3 00:10:20.420 Error Recovery Timeout: Unlimited 00:10:20.420 Command Set Identifier: NVM (00h) 00:10:20.420 Deallocate: Supported 00:10:20.420 Deallocated/Unwritten Error: Supported 00:10:20.420 Deallocated Read Value: All 0x00 00:10:20.420 Deallocate in Write Zeroes: Not Supported 00:10:20.420 Deallocated Guard Field: 0xFFFF 00:10:20.420 Flush: Supported 00:10:20.420 Reservation: Not Supported 00:10:20.420 Namespace Sharing Capabilities: Private 00:10:20.420 Size (in LBAs): 1048576 (4GiB) 00:10:20.420 Capacity (in LBAs): 1048576 (4GiB) 00:10:20.420 Utilization (in LBAs): 1048576 (4GiB) 00:10:20.420 Thin Provisioning: Not Supported 00:10:20.420 Per-NS Atomic Units: No 00:10:20.420 Maximum Single Source Range Length: 128 00:10:20.420 Maximum Copy Length: 128 00:10:20.420 Maximum Source Range Count: 128 00:10:20.420 NGUID/EUI64 Never Reused: No 00:10:20.420 Namespace Write Protected: No 00:10:20.420 Number of LBA Formats: 8 00:10:20.420 Current LBA Format: LBA Format #04 00:10:20.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.420 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.420 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.420 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.420 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.420 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.420 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.420 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.420 00:10:20.420 01:20:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:20.420 01:20:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:20.679 ===================================================== 00:10:20.679 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:20.679 ===================================================== 00:10:20.679 Controller Capabilities/Features 00:10:20.679 ================================ 00:10:20.679 Vendor ID: 1b36 00:10:20.679 Subsystem Vendor ID: 1af4 00:10:20.679 Serial Number: 12343 00:10:20.679 Model Number: QEMU NVMe Ctrl 00:10:20.679 Firmware Version: 8.0.0 00:10:20.679 Recommended Arb Burst: 6 
00:10:20.679 IEEE OUI Identifier: 00 54 52 00:10:20.679 Multi-path I/O 00:10:20.679 May have multiple subsystem ports: No 00:10:20.679 May have multiple controllers: Yes 00:10:20.679 Associated with SR-IOV VF: No 00:10:20.679 Max Data Transfer Size: 524288 00:10:20.679 Max Number of Namespaces: 256 00:10:20.679 Max Number of I/O Queues: 64 00:10:20.679 NVMe Specification Version (VS): 1.4 00:10:20.679 NVMe Specification Version (Identify): 1.4 00:10:20.679 Maximum Queue Entries: 2048 00:10:20.679 Contiguous Queues Required: Yes 00:10:20.679 Arbitration Mechanisms Supported 00:10:20.679 Weighted Round Robin: Not Supported 00:10:20.679 Vendor Specific: Not Supported 00:10:20.679 Reset Timeout: 7500 ms 00:10:20.679 Doorbell Stride: 4 bytes 00:10:20.679 NVM Subsystem Reset: Not Supported 00:10:20.679 Command Sets Supported 00:10:20.679 NVM Command Set: Supported 00:10:20.679 Boot Partition: Not Supported 00:10:20.679 Memory Page Size Minimum: 4096 bytes 00:10:20.679 Memory Page Size Maximum: 65536 bytes 00:10:20.679 Persistent Memory Region: Not Supported 00:10:20.679 Optional Asynchronous Events Supported 00:10:20.679 Namespace Attribute Notices: Supported 00:10:20.679 Firmware Activation Notices: Not Supported 00:10:20.679 ANA Change Notices: Not Supported 00:10:20.679 PLE Aggregate Log Change Notices: Not Supported 00:10:20.679 LBA Status Info Alert Notices: Not Supported 00:10:20.679 EGE Aggregate Log Change Notices: Not Supported 00:10:20.679 Normal NVM Subsystem Shutdown event: Not Supported 00:10:20.679 Zone Descriptor Change Notices: Not Supported 00:10:20.679 Discovery Log Change Notices: Not Supported 00:10:20.679 Controller Attributes 00:10:20.679 128-bit Host Identifier: Not Supported 00:10:20.679 Non-Operational Permissive Mode: Not Supported 00:10:20.679 NVM Sets: Not Supported 00:10:20.679 Read Recovery Levels: Not Supported 00:10:20.679 Endurance Groups: Supported 00:10:20.679 Predictable Latency Mode: Not Supported 00:10:20.679 Traffic Based Keep ALive: Not Supported 00:10:20.679 Namespace Granularity: Not Supported 00:10:20.679 SQ Associations: Not Supported 00:10:20.679 UUID List: Not Supported 00:10:20.679 Multi-Domain Subsystem: Not Supported 00:10:20.679 Fixed Capacity Management: Not Supported 00:10:20.679 Variable Capacity Management: Not Supported 00:10:20.679 Delete Endurance Group: Not Supported 00:10:20.679 Delete NVM Set: Not Supported 00:10:20.679 Extended LBA Formats Supported: Supported 00:10:20.679 Flexible Data Placement Supported: Supported 00:10:20.679 00:10:20.679 Controller Memory Buffer Support 00:10:20.679 ================================ 00:10:20.679 Supported: No 00:10:20.679 00:10:20.679 Persistent Memory Region Support 00:10:20.679 ================================ 00:10:20.679 Supported: No 00:10:20.679 00:10:20.679 Admin Command Set Attributes 00:10:20.679 ============================ 00:10:20.679 Security Send/Receive: Not Supported 00:10:20.679 Format NVM: Supported 00:10:20.679 Firmware Activate/Download: Not Supported 00:10:20.679 Namespace Management: Supported 00:10:20.679 Device Self-Test: Not Supported 00:10:20.679 Directives: Supported 00:10:20.679 NVMe-MI: Not Supported 00:10:20.679 Virtualization Management: Not Supported 00:10:20.679 Doorbell Buffer Config: Supported 00:10:20.679 Get LBA Status Capability: Not Supported 00:10:20.679 Command & Feature Lockdown Capability: Not Supported 00:10:20.679 Abort Command Limit: 4 00:10:20.679 Async Event Request Limit: 4 00:10:20.679 Number of Firmware Slots: N/A 00:10:20.679 Firmware Slot 1 
Read-Only: N/A 00:10:20.679 Firmware Activation Without Reset: N/A 00:10:20.679 Multiple Update Detection Support: N/A 00:10:20.679 Firmware Update Granularity: No Information Provided 00:10:20.679 Per-Namespace SMART Log: Yes 00:10:20.679 Asymmetric Namespace Access Log Page: Not Supported 00:10:20.679 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:20.679 Command Effects Log Page: Supported 00:10:20.679 Get Log Page Extended Data: Supported 00:10:20.679 Telemetry Log Pages: Not Supported 00:10:20.679 Persistent Event Log Pages: Not Supported 00:10:20.679 Supported Log Pages Log Page: May Support 00:10:20.679 Commands Supported & Effects Log Page: Not Supported 00:10:20.679 Feature Identifiers & Effects Log Page:May Support 00:10:20.679 NVMe-MI Commands & Effects Log Page: May Support 00:10:20.679 Data Area 4 for Telemetry Log: Not Supported 00:10:20.679 Error Log Page Entries Supported: 1 00:10:20.679 Keep Alive: Not Supported 00:10:20.679 00:10:20.679 NVM Command Set Attributes 00:10:20.679 ========================== 00:10:20.679 Submission Queue Entry Size 00:10:20.679 Max: 64 00:10:20.679 Min: 64 00:10:20.679 Completion Queue Entry Size 00:10:20.679 Max: 16 00:10:20.679 Min: 16 00:10:20.679 Number of Namespaces: 256 00:10:20.679 Compare Command: Supported 00:10:20.680 Write Uncorrectable Command: Not Supported 00:10:20.680 Dataset Management Command: Supported 00:10:20.680 Write Zeroes Command: Supported 00:10:20.680 Set Features Save Field: Supported 00:10:20.680 Reservations: Not Supported 00:10:20.680 Timestamp: Supported 00:10:20.680 Copy: Supported 00:10:20.680 Volatile Write Cache: Present 00:10:20.680 Atomic Write Unit (Normal): 1 00:10:20.680 Atomic Write Unit (PFail): 1 00:10:20.680 Atomic Compare & Write Unit: 1 00:10:20.680 Fused Compare & Write: Not Supported 00:10:20.680 Scatter-Gather List 00:10:20.680 SGL Command Set: Supported 00:10:20.680 SGL Keyed: Not Supported 00:10:20.680 SGL Bit Bucket Descriptor: Not Supported 00:10:20.680 SGL Metadata Pointer: Not Supported 00:10:20.680 Oversized SGL: Not Supported 00:10:20.680 SGL Metadata Address: Not Supported 00:10:20.680 SGL Offset: Not Supported 00:10:20.680 Transport SGL Data Block: Not Supported 00:10:20.680 Replay Protected Memory Block: Not Supported 00:10:20.680 00:10:20.680 Firmware Slot Information 00:10:20.680 ========================= 00:10:20.680 Active slot: 1 00:10:20.680 Slot 1 Firmware Revision: 1.0 00:10:20.680 00:10:20.680 00:10:20.680 Commands Supported and Effects 00:10:20.680 ============================== 00:10:20.680 Admin Commands 00:10:20.680 -------------- 00:10:20.680 Delete I/O Submission Queue (00h): Supported 00:10:20.680 Create I/O Submission Queue (01h): Supported 00:10:20.680 Get Log Page (02h): Supported 00:10:20.680 Delete I/O Completion Queue (04h): Supported 00:10:20.680 Create I/O Completion Queue (05h): Supported 00:10:20.680 Identify (06h): Supported 00:10:20.680 Abort (08h): Supported 00:10:20.680 Set Features (09h): Supported 00:10:20.680 Get Features (0Ah): Supported 00:10:20.680 Asynchronous Event Request (0Ch): Supported 00:10:20.680 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:20.680 Directive Send (19h): Supported 00:10:20.680 Directive Receive (1Ah): Supported 00:10:20.680 Virtualization Management (1Ch): Supported 00:10:20.680 Doorbell Buffer Config (7Ch): Supported 00:10:20.680 Format NVM (80h): Supported LBA-Change 00:10:20.680 I/O Commands 00:10:20.680 ------------ 00:10:20.680 Flush (00h): Supported LBA-Change 00:10:20.680 Write (01h): Supported 
LBA-Change 00:10:20.680 Read (02h): Supported 00:10:20.680 Compare (05h): Supported 00:10:20.680 Write Zeroes (08h): Supported LBA-Change 00:10:20.680 Dataset Management (09h): Supported LBA-Change 00:10:20.680 Unknown (0Ch): Supported 00:10:20.680 Unknown (12h): Supported 00:10:20.680 Copy (19h): Supported LBA-Change 00:10:20.680 Unknown (1Dh): Supported LBA-Change 00:10:20.680 00:10:20.680 Error Log 00:10:20.680 ========= 00:10:20.680 00:10:20.680 Arbitration 00:10:20.680 =========== 00:10:20.680 Arbitration Burst: no limit 00:10:20.680 00:10:20.680 Power Management 00:10:20.680 ================ 00:10:20.680 Number of Power States: 1 00:10:20.680 Current Power State: Power State #0 00:10:20.680 Power State #0: 00:10:20.680 Max Power: 25.00 W 00:10:20.680 Non-Operational State: Operational 00:10:20.680 Entry Latency: 16 microseconds 00:10:20.680 Exit Latency: 4 microseconds 00:10:20.680 Relative Read Throughput: 0 00:10:20.680 Relative Read Latency: 0 00:10:20.680 Relative Write Throughput: 0 00:10:20.680 Relative Write Latency: 0 00:10:20.680 Idle Power: Not Reported 00:10:20.680 Active Power: Not Reported 00:10:20.680 Non-Operational Permissive Mode: Not Supported 00:10:20.680 00:10:20.680 Health Information 00:10:20.680 ================== 00:10:20.680 Critical Warnings: 00:10:20.680 Available Spare Space: OK 00:10:20.680 Temperature: OK 00:10:20.680 Device Reliability: OK 00:10:20.680 Read Only: No 00:10:20.680 Volatile Memory Backup: OK 00:10:20.680 Current Temperature: 323 Kelvin (50 Celsius) 00:10:20.680 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:20.680 Available Spare: 0% 00:10:20.680 Available Spare Threshold: 0% 00:10:20.680 Life Percentage Used: 0% 00:10:20.680 Data Units Read: 1062 00:10:20.680 Data Units Written: 955 00:10:20.680 Host Read Commands: 38447 00:10:20.680 Host Write Commands: 37037 00:10:20.680 Controller Busy Time: 0 minutes 00:10:20.680 Power Cycles: 0 00:10:20.680 Power On Hours: 0 hours 00:10:20.680 Unsafe Shutdowns: 0 00:10:20.680 Unrecoverable Media Errors: 0 00:10:20.680 Lifetime Error Log Entries: 0 00:10:20.680 Warning Temperature Time: 0 minutes 00:10:20.680 Critical Temperature Time: 0 minutes 00:10:20.680 00:10:20.680 Number of Queues 00:10:20.680 ================ 00:10:20.680 Number of I/O Submission Queues: 64 00:10:20.680 Number of I/O Completion Queues: 64 00:10:20.680 00:10:20.680 ZNS Specific Controller Data 00:10:20.680 ============================ 00:10:20.680 Zone Append Size Limit: 0 00:10:20.680 00:10:20.680 00:10:20.680 Active Namespaces 00:10:20.680 ================= 00:10:20.680 Namespace ID:1 00:10:20.680 Error Recovery Timeout: Unlimited 00:10:20.680 Command Set Identifier: NVM (00h) 00:10:20.680 Deallocate: Supported 00:10:20.680 Deallocated/Unwritten Error: Supported 00:10:20.680 Deallocated Read Value: All 0x00 00:10:20.680 Deallocate in Write Zeroes: Not Supported 00:10:20.680 Deallocated Guard Field: 0xFFFF 00:10:20.680 Flush: Supported 00:10:20.680 Reservation: Not Supported 00:10:20.680 Namespace Sharing Capabilities: Multiple Controllers 00:10:20.680 Size (in LBAs): 262144 (1GiB) 00:10:20.680 Capacity (in LBAs): 262144 (1GiB) 00:10:20.680 Utilization (in LBAs): 262144 (1GiB) 00:10:20.680 Thin Provisioning: Not Supported 00:10:20.680 Per-NS Atomic Units: No 00:10:20.680 Maximum Single Source Range Length: 128 00:10:20.680 Maximum Copy Length: 128 00:10:20.680 Maximum Source Range Count: 128 00:10:20.680 NGUID/EUI64 Never Reused: No 00:10:20.680 Namespace Write Protected: No 00:10:20.680 Endurance group ID: 1 
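
The namespace listings report both a size in LBAs and a rounded capacity, and the two are consistent: the namespaces on 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0 all use LBA format #04 (4096-byte data blocks, no metadata), so the byte capacity is simply the LBA count multiplied by 4096. A quick shell check, purely illustrative and not part of the test:

# capacity = LBA count x LBA data size (4096 bytes for LBA format #04)
echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB  (namespace on 0000:00:11.0)
echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB  (namespaces 1-3 on 0000:00:12.0)
echo $((  262144 * 4096 ))   # 1073741824 bytes = 1 GiB  (FDP-backed namespace on 0000:00:13.0)
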
00:10:20.680 Number of LBA Formats: 8 00:10:20.680 Current LBA Format: LBA Format #04 00:10:20.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.680 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:20.680 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:20.680 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:20.680 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:20.680 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:20.680 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:20.680 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:20.680 00:10:20.680 Get Feature FDP: 00:10:20.680 ================ 00:10:20.680 Enabled: Yes 00:10:20.680 FDP configuration index: 0 00:10:20.680 00:10:20.680 FDP configurations log page 00:10:20.680 =========================== 00:10:20.680 Number of FDP configurations: 1 00:10:20.680 Version: 0 00:10:20.680 Size: 112 00:10:20.680 FDP Configuration Descriptor: 0 00:10:20.680 Descriptor Size: 96 00:10:20.680 Reclaim Group Identifier format: 2 00:10:20.680 FDP Volatile Write Cache: Not Present 00:10:20.680 FDP Configuration: Valid 00:10:20.680 Vendor Specific Size: 0 00:10:20.680 Number of Reclaim Groups: 2 00:10:20.680 Number of Recalim Unit Handles: 8 00:10:20.680 Max Placement Identifiers: 128 00:10:20.680 Number of Namespaces Suppprted: 256 00:10:20.680 Reclaim unit Nominal Size: 6000000 bytes 00:10:20.680 Estimated Reclaim Unit Time Limit: Not Reported 00:10:20.680 RUH Desc #000: RUH Type: Initially Isolated 00:10:20.680 RUH Desc #001: RUH Type: Initially Isolated 00:10:20.680 RUH Desc #002: RUH Type: Initially Isolated 00:10:20.680 RUH Desc #003: RUH Type: Initially Isolated 00:10:20.680 RUH Desc #004: RUH Type: Initially Isolated 00:10:20.680 RUH Desc #005: RUH Type: Initially Isolated 00:10:20.680 RUH Desc #006: RUH Type: Initially Isolated 00:10:20.680 RUH Desc #007: RUH Type: Initially Isolated 00:10:20.680 00:10:20.680 FDP reclaim unit handle usage log page 00:10:20.680 ====================================== 00:10:20.680 Number of Reclaim Unit Handles: 8 00:10:20.680 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:20.680 RUH Usage Desc #001: RUH Attributes: Unused 00:10:20.680 RUH Usage Desc #002: RUH Attributes: Unused 00:10:20.680 RUH Usage Desc #003: RUH Attributes: Unused 00:10:20.680 RUH Usage Desc #004: RUH Attributes: Unused 00:10:20.680 RUH Usage Desc #005: RUH Attributes: Unused 00:10:20.680 RUH Usage Desc #006: RUH Attributes: Unused 00:10:20.680 RUH Usage Desc #007: RUH Attributes: Unused 00:10:20.680 00:10:20.680 FDP statistics log page 00:10:20.680 ======================= 00:10:20.680 Host bytes with metadata written: 604676096 00:10:20.680 Media bytes with metadata written: 604758016 00:10:20.680 Media bytes erased: 0 00:10:20.680 00:10:20.680 FDP events log page 00:10:20.680 =================== 00:10:20.680 Number of FDP events: 0 00:10:20.680 00:10:20.680 00:10:20.680 real 0m1.548s 00:10:20.680 user 0m0.537s 00:10:20.680 sys 0m0.802s 00:10:20.680 01:20:05 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:20.680 01:20:05 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:20.680 ************************************ 00:10:20.680 END TEST nvme_identify 00:10:20.680 ************************************ 00:10:20.938 01:20:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:20.938 01:20:06 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:20.938 01:20:06 nvme -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:10:20.938 01:20:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:20.938 ************************************ 00:10:20.938 START TEST nvme_perf 00:10:20.938 ************************************ 00:10:20.938 01:20:06 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:10:20.938 01:20:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:22.317 Initializing NVMe Controllers 00:10:22.317 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:22.317 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:22.317 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:22.317 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:22.317 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:22.317 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:22.317 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:22.317 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:22.317 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:22.317 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:22.317 Initialization complete. Launching workers. 00:10:22.317 ======================================================== 00:10:22.317 Latency(us) 00:10:22.317 Device Information : IOPS MiB/s Average min max 00:10:22.317 PCIE (0000:00:10.0) NSID 1 from core 0: 13746.09 161.09 9315.32 6237.92 41788.67 00:10:22.317 PCIE (0000:00:11.0) NSID 1 from core 0: 13746.09 161.09 9309.52 6051.02 40967.70 00:10:22.317 PCIE (0000:00:13.0) NSID 1 from core 0: 13746.09 161.09 9301.88 5310.45 41010.94 00:10:22.317 PCIE (0000:00:12.0) NSID 1 from core 0: 13746.09 161.09 9294.04 4959.25 40577.07 00:10:22.317 PCIE (0000:00:12.0) NSID 2 from core 0: 13746.09 161.09 9286.20 4625.71 40080.39 00:10:22.317 PCIE (0000:00:12.0) NSID 3 from core 0: 13810.02 161.84 9234.94 4314.12 34803.77 00:10:22.317 ======================================================== 00:10:22.317 Total : 82540.47 967.27 9290.27 4314.12 41788.67 00:10:22.317 00:10:22.317 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:22.317 ================================================================================= 00:10:22.317 1.00000% : 7948.543us 00:10:22.317 10.00000% : 8264.379us 00:10:22.317 25.00000% : 8527.576us 00:10:22.317 50.00000% : 8843.412us 00:10:22.317 75.00000% : 9211.888us 00:10:22.317 90.00000% : 9685.642us 00:10:22.317 95.00000% : 11054.265us 00:10:22.317 98.00000% : 16212.922us 00:10:22.317 99.00000% : 19266.005us 00:10:22.317 99.50000% : 35794.763us 00:10:22.317 99.90000% : 41479.814us 00:10:22.317 99.99000% : 41900.929us 00:10:22.317 99.99900% : 41900.929us 00:10:22.317 99.99990% : 41900.929us 00:10:22.317 99.99999% : 41900.929us 00:10:22.317 00:10:22.317 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:22.317 ================================================================================= 00:10:22.317 1.00000% : 8053.822us 00:10:22.317 10.00000% : 8317.018us 00:10:22.317 25.00000% : 8527.576us 00:10:22.317 50.00000% : 8790.773us 00:10:22.317 75.00000% : 9159.248us 00:10:22.317 90.00000% : 9633.002us 00:10:22.317 95.00000% : 11317.462us 00:10:22.317 98.00000% : 16949.873us 00:10:22.317 99.00000% : 18634.333us 00:10:22.317 99.50000% : 35584.206us 00:10:22.317 99.90000% : 40848.141us 00:10:22.317 99.99000% : 41058.699us 00:10:22.317 99.99900% : 41058.699us 00:10:22.317 99.99990% : 41058.699us 00:10:22.317 99.99999% : 41058.699us 
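
The perf stage boils down to a single spdk_nvme_perf invocation, recorded verbatim at the start of this test: queue depth 128 (-q), a read workload (-w), 12288-byte (12 KiB) I/Os (-o), a 1-second run (-t), with -LL evidently enabling the detailed latency tracking that produces the histograms further down; the remaining flags (-i 0 -N) are copied as-is from the logged command. A sketch for repeating the same measurement by hand, assuming this job's workspace layout:

# Sketch: repeat the logged measurement (binary path matches this CI workspace).
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

In this output, the summary table gives per-namespace IOPS, MiB/s and average/min/max latency in microseconds, the "Summary latency data" blocks give selected percentiles, and in the per-device latency histograms the percentage column is cumulative while the parenthesised figure appears to be the per-bucket I/O count.
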
00:10:22.317 00:10:22.317 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:22.317 ================================================================================= 00:10:22.317 1.00000% : 8001.182us 00:10:22.317 10.00000% : 8317.018us 00:10:22.317 25.00000% : 8527.576us 00:10:22.317 50.00000% : 8790.773us 00:10:22.317 75.00000% : 9159.248us 00:10:22.317 90.00000% : 9633.002us 00:10:22.317 95.00000% : 11054.265us 00:10:22.317 98.00000% : 17160.431us 00:10:22.317 99.00000% : 18634.333us 00:10:22.317 99.50000% : 35584.206us 00:10:22.317 99.90000% : 40848.141us 00:10:22.317 99.99000% : 41058.699us 00:10:22.317 99.99900% : 41058.699us 00:10:22.317 99.99990% : 41058.699us 00:10:22.317 99.99999% : 41058.699us 00:10:22.317 00:10:22.317 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:22.317 ================================================================================= 00:10:22.317 1.00000% : 7948.543us 00:10:22.317 10.00000% : 8317.018us 00:10:22.317 25.00000% : 8527.576us 00:10:22.317 50.00000% : 8790.773us 00:10:22.317 75.00000% : 9159.248us 00:10:22.317 90.00000% : 9633.002us 00:10:22.317 95.00000% : 10843.708us 00:10:22.317 98.00000% : 16844.594us 00:10:22.317 99.00000% : 19266.005us 00:10:22.317 99.50000% : 35163.091us 00:10:22.317 99.90000% : 40427.027us 00:10:22.317 99.99000% : 40637.584us 00:10:22.317 99.99900% : 40637.584us 00:10:22.317 99.99990% : 40637.584us 00:10:22.317 99.99999% : 40637.584us 00:10:22.317 00:10:22.317 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:22.317 ================================================================================= 00:10:22.317 1.00000% : 7895.904us 00:10:22.317 10.00000% : 8317.018us 00:10:22.317 25.00000% : 8527.576us 00:10:22.317 50.00000% : 8790.773us 00:10:22.317 75.00000% : 9159.248us 00:10:22.317 90.00000% : 9580.363us 00:10:22.317 95.00000% : 10685.790us 00:10:22.317 98.00000% : 16423.480us 00:10:22.317 99.00000% : 20108.235us 00:10:22.317 99.50000% : 34741.976us 00:10:22.317 99.90000% : 40005.912us 00:10:22.317 99.99000% : 40216.469us 00:10:22.317 99.99900% : 40216.469us 00:10:22.317 99.99990% : 40216.469us 00:10:22.317 99.99999% : 40216.469us 00:10:22.317 00:10:22.317 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:22.317 ================================================================================= 00:10:22.317 1.00000% : 7948.543us 00:10:22.317 10.00000% : 8317.018us 00:10:22.317 25.00000% : 8527.576us 00:10:22.317 50.00000% : 8843.412us 00:10:22.317 75.00000% : 9159.248us 00:10:22.317 90.00000% : 9633.002us 00:10:22.317 95.00000% : 10896.347us 00:10:22.317 98.00000% : 15897.086us 00:10:22.317 99.00000% : 19897.677us 00:10:22.317 99.50000% : 29056.925us 00:10:22.317 99.90000% : 34741.976us 00:10:22.317 99.99000% : 34952.533us 00:10:22.317 99.99900% : 34952.533us 00:10:22.317 99.99990% : 34952.533us 00:10:22.317 99.99999% : 34952.533us 00:10:22.317 00:10:22.317 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:22.317 ============================================================================== 00:10:22.317 Range in us Cumulative IO count 00:10:22.317 6237.764 - 6264.084: 0.0291% ( 4) 00:10:22.317 6264.084 - 6290.403: 0.0436% ( 2) 00:10:22.317 6290.403 - 6316.723: 0.0581% ( 2) 00:10:22.317 6316.723 - 6343.043: 0.0727% ( 2) 00:10:22.317 6343.043 - 6369.362: 0.0799% ( 1) 00:10:22.317 6369.362 - 6395.682: 0.0945% ( 2) 00:10:22.317 6395.682 - 6422.002: 0.1090% ( 2) 00:10:22.317 6422.002 - 6448.321: 0.1308% ( 3) 00:10:22.317 
6448.321 - 6474.641: 0.1453% ( 2) 00:10:22.317 6474.641 - 6500.961: 0.1526% ( 1) 00:10:22.317 6500.961 - 6527.280: 0.1672% ( 2) 00:10:22.317 6527.280 - 6553.600: 0.1744% ( 1) 00:10:22.317 6553.600 - 6579.920: 0.2035% ( 4) 00:10:22.317 6606.239 - 6632.559: 0.2180% ( 2) 00:10:22.317 6632.559 - 6658.879: 0.2398% ( 3) 00:10:22.318 6658.879 - 6685.198: 0.2471% ( 1) 00:10:22.318 6685.198 - 6711.518: 0.2616% ( 2) 00:10:22.318 6711.518 - 6737.838: 0.2762% ( 2) 00:10:22.318 6737.838 - 6790.477: 0.2980% ( 3) 00:10:22.318 6790.477 - 6843.116: 0.3270% ( 4) 00:10:22.318 6843.116 - 6895.756: 0.3634% ( 5) 00:10:22.318 6895.756 - 6948.395: 0.3852% ( 3) 00:10:22.318 6948.395 - 7001.035: 0.4070% ( 3) 00:10:22.318 7001.035 - 7053.674: 0.4360% ( 4) 00:10:22.318 7053.674 - 7106.313: 0.4651% ( 4) 00:10:22.318 7737.986 - 7790.625: 0.5087% ( 6) 00:10:22.318 7790.625 - 7843.264: 0.5741% ( 9) 00:10:22.318 7843.264 - 7895.904: 0.7413% ( 23) 00:10:22.318 7895.904 - 7948.543: 1.2427% ( 69) 00:10:22.318 7948.543 - 8001.182: 2.0494% ( 111) 00:10:22.318 8001.182 - 8053.822: 3.2776% ( 169) 00:10:22.318 8053.822 - 8106.461: 4.7020% ( 196) 00:10:22.318 8106.461 - 8159.100: 6.5116% ( 249) 00:10:22.318 8159.100 - 8211.740: 8.5247% ( 277) 00:10:22.318 8211.740 - 8264.379: 11.3372% ( 387) 00:10:22.318 8264.379 - 8317.018: 14.3169% ( 410) 00:10:22.318 8317.018 - 8369.658: 17.4273% ( 428) 00:10:22.318 8369.658 - 8422.297: 20.6759% ( 447) 00:10:22.318 8422.297 - 8474.937: 24.2297% ( 489) 00:10:22.318 8474.937 - 8527.576: 27.8198% ( 494) 00:10:22.318 8527.576 - 8580.215: 31.8169% ( 550) 00:10:22.318 8580.215 - 8632.855: 36.1410% ( 595) 00:10:22.318 8632.855 - 8685.494: 40.2834% ( 570) 00:10:22.318 8685.494 - 8738.133: 44.4985% ( 580) 00:10:22.318 8738.133 - 8790.773: 48.8081% ( 593) 00:10:22.318 8790.773 - 8843.412: 53.1468% ( 597) 00:10:22.318 8843.412 - 8896.051: 57.4564% ( 593) 00:10:22.318 8896.051 - 8948.691: 61.2137% ( 517) 00:10:22.318 8948.691 - 9001.330: 64.6802% ( 477) 00:10:22.318 9001.330 - 9053.969: 68.1468% ( 477) 00:10:22.318 9053.969 - 9106.609: 71.3445% ( 440) 00:10:22.318 9106.609 - 9159.248: 74.4259% ( 424) 00:10:22.318 9159.248 - 9211.888: 77.2238% ( 385) 00:10:22.318 9211.888 - 9264.527: 79.4840% ( 311) 00:10:22.318 9264.527 - 9317.166: 81.4971% ( 277) 00:10:22.318 9317.166 - 9369.806: 83.1759% ( 231) 00:10:22.318 9369.806 - 9422.445: 84.7602% ( 218) 00:10:22.318 9422.445 - 9475.084: 86.2209% ( 201) 00:10:22.318 9475.084 - 9527.724: 87.5218% ( 179) 00:10:22.318 9527.724 - 9580.363: 88.6701% ( 158) 00:10:22.318 9580.363 - 9633.002: 89.5422% ( 120) 00:10:22.318 9633.002 - 9685.642: 90.2616% ( 99) 00:10:22.318 9685.642 - 9738.281: 90.8358% ( 79) 00:10:22.318 9738.281 - 9790.920: 91.3154% ( 66) 00:10:22.318 9790.920 - 9843.560: 91.7878% ( 65) 00:10:22.318 9843.560 - 9896.199: 92.2311% ( 61) 00:10:22.318 9896.199 - 9948.839: 92.6526% ( 58) 00:10:22.318 9948.839 - 10001.478: 92.9360% ( 39) 00:10:22.318 10001.478 - 10054.117: 93.2776% ( 47) 00:10:22.318 10054.117 - 10106.757: 93.5102% ( 32) 00:10:22.318 10106.757 - 10159.396: 93.6846% ( 24) 00:10:22.318 10159.396 - 10212.035: 93.8009% ( 16) 00:10:22.318 10212.035 - 10264.675: 93.9462% ( 20) 00:10:22.318 10264.675 - 10317.314: 94.0698% ( 17) 00:10:22.318 10317.314 - 10369.953: 94.1497% ( 11) 00:10:22.318 10369.953 - 10422.593: 94.2442% ( 13) 00:10:22.318 10422.593 - 10475.232: 94.3387% ( 13) 00:10:22.318 10475.232 - 10527.871: 94.4113% ( 10) 00:10:22.318 10527.871 - 10580.511: 94.4985% ( 12) 00:10:22.318 10580.511 - 10633.150: 94.5640% ( 9) 00:10:22.318 10633.150 - 
10685.790: 94.6148% ( 7) 00:10:22.318 10685.790 - 10738.429: 94.6802% ( 9) 00:10:22.318 10738.429 - 10791.068: 94.7166% ( 5) 00:10:22.318 10791.068 - 10843.708: 94.7820% ( 9) 00:10:22.318 10843.708 - 10896.347: 94.8401% ( 8) 00:10:22.318 10896.347 - 10948.986: 94.8910% ( 7) 00:10:22.318 10948.986 - 11001.626: 94.9419% ( 7) 00:10:22.318 11001.626 - 11054.265: 95.0000% ( 8) 00:10:22.318 11054.265 - 11106.904: 95.0581% ( 8) 00:10:22.318 11106.904 - 11159.544: 95.1235% ( 9) 00:10:22.318 11159.544 - 11212.183: 95.1962% ( 10) 00:10:22.318 11212.183 - 11264.822: 95.2544% ( 8) 00:10:22.318 11264.822 - 11317.462: 95.2980% ( 6) 00:10:22.318 11317.462 - 11370.101: 95.3634% ( 9) 00:10:22.318 11370.101 - 11422.741: 95.4288% ( 9) 00:10:22.318 11422.741 - 11475.380: 95.5233% ( 13) 00:10:22.318 11475.380 - 11528.019: 95.5669% ( 6) 00:10:22.318 11528.019 - 11580.659: 95.6105% ( 6) 00:10:22.318 11580.659 - 11633.298: 95.6613% ( 7) 00:10:22.318 11633.298 - 11685.937: 95.6904% ( 4) 00:10:22.318 11685.937 - 11738.577: 95.7195% ( 4) 00:10:22.318 11738.577 - 11791.216: 95.7703% ( 7) 00:10:22.318 11791.216 - 11843.855: 95.8140% ( 6) 00:10:22.318 11843.855 - 11896.495: 95.8430% ( 4) 00:10:22.318 11896.495 - 11949.134: 95.8866% ( 6) 00:10:22.318 11949.134 - 12001.773: 95.9230% ( 5) 00:10:22.318 12001.773 - 12054.413: 95.9666% ( 6) 00:10:22.318 12054.413 - 12107.052: 96.0029% ( 5) 00:10:22.318 12107.052 - 12159.692: 96.0320% ( 4) 00:10:22.318 12159.692 - 12212.331: 96.0756% ( 6) 00:10:22.318 12212.331 - 12264.970: 96.1265% ( 7) 00:10:22.318 12264.970 - 12317.610: 96.1410% ( 2) 00:10:22.318 12317.610 - 12370.249: 96.1919% ( 7) 00:10:22.318 12370.249 - 12422.888: 96.2500% ( 8) 00:10:22.318 12422.888 - 12475.528: 96.2791% ( 4) 00:10:22.318 12475.528 - 12528.167: 96.3081% ( 4) 00:10:22.318 12528.167 - 12580.806: 96.3590% ( 7) 00:10:22.318 12580.806 - 12633.446: 96.3881% ( 4) 00:10:22.318 12633.446 - 12686.085: 96.4244% ( 5) 00:10:22.318 12686.085 - 12738.724: 96.4680% ( 6) 00:10:22.318 12738.724 - 12791.364: 96.5044% ( 5) 00:10:22.318 12791.364 - 12844.003: 96.5407% ( 5) 00:10:22.318 12844.003 - 12896.643: 96.5552% ( 2) 00:10:22.318 12896.643 - 12949.282: 96.5698% ( 2) 00:10:22.318 12949.282 - 13001.921: 96.5988% ( 4) 00:10:22.318 13001.921 - 13054.561: 96.6134% ( 2) 00:10:22.318 13054.561 - 13107.200: 96.6424% ( 4) 00:10:22.318 13107.200 - 13159.839: 96.6642% ( 3) 00:10:22.318 13159.839 - 13212.479: 96.6715% ( 1) 00:10:22.318 13212.479 - 13265.118: 96.6860% ( 2) 00:10:22.318 13265.118 - 13317.757: 96.6933% ( 1) 00:10:22.318 13317.757 - 13370.397: 96.7006% ( 1) 00:10:22.318 13370.397 - 13423.036: 96.7151% ( 2) 00:10:22.318 13423.036 - 13475.676: 96.7224% ( 1) 00:10:22.318 13475.676 - 13580.954: 96.7442% ( 3) 00:10:22.318 13580.954 - 13686.233: 96.7587% ( 2) 00:10:22.318 13686.233 - 13791.512: 96.7805% ( 3) 00:10:22.318 13791.512 - 13896.790: 96.8096% ( 4) 00:10:22.318 13896.790 - 14002.069: 96.8241% ( 2) 00:10:22.318 14002.069 - 14107.348: 96.8387% ( 2) 00:10:22.318 14107.348 - 14212.627: 96.8677% ( 4) 00:10:22.318 14212.627 - 14317.905: 96.8895% ( 3) 00:10:22.318 14317.905 - 14423.184: 96.9113% ( 3) 00:10:22.318 14423.184 - 14528.463: 96.9259% ( 2) 00:10:22.318 14528.463 - 14633.741: 96.9622% ( 5) 00:10:22.318 14633.741 - 14739.020: 96.9985% ( 5) 00:10:22.318 14739.020 - 14844.299: 97.0567% ( 8) 00:10:22.318 14844.299 - 14949.578: 97.1003% ( 6) 00:10:22.318 14949.578 - 15054.856: 97.1439% ( 6) 00:10:22.318 15054.856 - 15160.135: 97.2093% ( 9) 00:10:22.318 15160.135 - 15265.414: 97.2674% ( 8) 00:10:22.318 15265.414 - 
15370.692: 97.3401% ( 10) 00:10:22.318 15370.692 - 15475.971: 97.4128% ( 10) 00:10:22.318 15475.971 - 15581.250: 97.4782% ( 9) 00:10:22.318 15581.250 - 15686.529: 97.5581% ( 11) 00:10:22.318 15686.529 - 15791.807: 97.6672% ( 15) 00:10:22.318 15791.807 - 15897.086: 97.7616% ( 13) 00:10:22.318 15897.086 - 16002.365: 97.8706% ( 15) 00:10:22.318 16002.365 - 16107.643: 97.9797% ( 15) 00:10:22.318 16107.643 - 16212.922: 98.0596% ( 11) 00:10:22.318 16212.922 - 16318.201: 98.1323% ( 10) 00:10:22.318 16318.201 - 16423.480: 98.2049% ( 10) 00:10:22.318 16423.480 - 16528.758: 98.2703% ( 9) 00:10:22.318 16528.758 - 16634.037: 98.3430% ( 10) 00:10:22.318 16634.037 - 16739.316: 98.4012% ( 8) 00:10:22.318 16739.316 - 16844.594: 98.4666% ( 9) 00:10:22.318 16844.594 - 16949.873: 98.5247% ( 8) 00:10:22.318 16949.873 - 17055.152: 98.5538% ( 4) 00:10:22.318 17055.152 - 17160.431: 98.5756% ( 3) 00:10:22.318 17160.431 - 17265.709: 98.5974% ( 3) 00:10:22.318 17265.709 - 17370.988: 98.6047% ( 1) 00:10:22.318 18107.939 - 18213.218: 98.6265% ( 3) 00:10:22.318 18213.218 - 18318.496: 98.6773% ( 7) 00:10:22.318 18318.496 - 18423.775: 98.7137% ( 5) 00:10:22.318 18423.775 - 18529.054: 98.7427% ( 4) 00:10:22.318 18529.054 - 18634.333: 98.7863% ( 6) 00:10:22.318 18634.333 - 18739.611: 98.8372% ( 7) 00:10:22.318 18739.611 - 18844.890: 98.8735% ( 5) 00:10:22.318 18844.890 - 18950.169: 98.9099% ( 5) 00:10:22.318 18950.169 - 19055.447: 98.9535% ( 6) 00:10:22.318 19055.447 - 19160.726: 98.9898% ( 5) 00:10:22.318 19160.726 - 19266.005: 99.0262% ( 5) 00:10:22.318 19266.005 - 19371.284: 99.0698% ( 6) 00:10:22.318 34531.418 - 34741.976: 99.1279% ( 8) 00:10:22.318 34741.976 - 34952.533: 99.2224% ( 13) 00:10:22.318 34952.533 - 35163.091: 99.3096% ( 12) 00:10:22.318 35163.091 - 35373.648: 99.4041% ( 13) 00:10:22.318 35373.648 - 35584.206: 99.4985% ( 13) 00:10:22.318 35584.206 - 35794.763: 99.5349% ( 5) 00:10:22.318 40427.027 - 40637.584: 99.6221% ( 12) 00:10:22.318 40637.584 - 40848.141: 99.7093% ( 12) 00:10:22.318 40848.141 - 41058.699: 99.7965% ( 12) 00:10:22.318 41058.699 - 41269.256: 99.8910% ( 13) 00:10:22.318 41269.256 - 41479.814: 99.9491% ( 8) 00:10:22.318 41479.814 - 41690.371: 99.9637% ( 2) 00:10:22.318 41690.371 - 41900.929: 100.0000% ( 5) 00:10:22.318 00:10:22.318 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:22.318 ============================================================================== 00:10:22.318 Range in us Cumulative IO count 00:10:22.318 6027.206 - 6053.526: 0.0073% ( 1) 00:10:22.318 6053.526 - 6079.846: 0.0291% ( 3) 00:10:22.318 6079.846 - 6106.165: 0.0363% ( 1) 00:10:22.318 6106.165 - 6132.485: 0.0509% ( 2) 00:10:22.318 6132.485 - 6158.805: 0.0654% ( 2) 00:10:22.318 6158.805 - 6185.124: 0.0872% ( 3) 00:10:22.318 6185.124 - 6211.444: 0.1017% ( 2) 00:10:22.318 6211.444 - 6237.764: 0.1163% ( 2) 00:10:22.318 6237.764 - 6264.084: 0.1235% ( 1) 00:10:22.318 6264.084 - 6290.403: 0.1381% ( 2) 00:10:22.318 6290.403 - 6316.723: 0.1599% ( 3) 00:10:22.318 6316.723 - 6343.043: 0.1744% ( 2) 00:10:22.318 6343.043 - 6369.362: 0.1890% ( 2) 00:10:22.318 6369.362 - 6395.682: 0.2035% ( 2) 00:10:22.318 6395.682 - 6422.002: 0.2180% ( 2) 00:10:22.318 6422.002 - 6448.321: 0.2398% ( 3) 00:10:22.318 6448.321 - 6474.641: 0.2544% ( 2) 00:10:22.318 6474.641 - 6500.961: 0.2689% ( 2) 00:10:22.318 6500.961 - 6527.280: 0.2834% ( 2) 00:10:22.318 6527.280 - 6553.600: 0.2980% ( 2) 00:10:22.318 6553.600 - 6579.920: 0.3125% ( 2) 00:10:22.318 6579.920 - 6606.239: 0.3343% ( 3) 00:10:22.318 6606.239 - 6632.559: 0.3488% ( 2) 
00:10:22.318 6632.559 - 6658.879: 0.3634% ( 2) 00:10:22.318 6658.879 - 6685.198: 0.3779% ( 2) 00:10:22.318 6685.198 - 6711.518: 0.3924% ( 2) 00:10:22.318 6711.518 - 6737.838: 0.4070% ( 2) 00:10:22.318 6737.838 - 6790.477: 0.4433% ( 5) 00:10:22.318 6790.477 - 6843.116: 0.4651% ( 3) 00:10:22.318 7843.264 - 7895.904: 0.4942% ( 4) 00:10:22.318 7895.904 - 7948.543: 0.6541% ( 22) 00:10:22.318 7948.543 - 8001.182: 0.9520% ( 41) 00:10:22.318 8001.182 - 8053.822: 1.7733% ( 113) 00:10:22.318 8053.822 - 8106.461: 2.9360% ( 160) 00:10:22.318 8106.461 - 8159.100: 4.5785% ( 226) 00:10:22.318 8159.100 - 8211.740: 6.4898% ( 263) 00:10:22.318 8211.740 - 8264.379: 8.7427% ( 310) 00:10:22.318 8264.379 - 8317.018: 11.5552% ( 387) 00:10:22.318 8317.018 - 8369.658: 14.9419% ( 466) 00:10:22.318 8369.658 - 8422.297: 18.6919% ( 516) 00:10:22.318 8422.297 - 8474.937: 22.4419% ( 516) 00:10:22.318 8474.937 - 8527.576: 26.5116% ( 560) 00:10:22.318 8527.576 - 8580.215: 30.7195% ( 579) 00:10:22.318 8580.215 - 8632.855: 35.3924% ( 643) 00:10:22.318 8632.855 - 8685.494: 40.2544% ( 669) 00:10:22.318 8685.494 - 8738.133: 45.1381% ( 672) 00:10:22.318 8738.133 - 8790.773: 50.0073% ( 670) 00:10:22.318 8790.773 - 8843.412: 54.6657% ( 641) 00:10:22.318 8843.412 - 8896.051: 58.8953% ( 582) 00:10:22.318 8896.051 - 8948.691: 62.9288% ( 555) 00:10:22.318 8948.691 - 9001.330: 66.6642% ( 514) 00:10:22.318 9001.330 - 9053.969: 70.2471% ( 493) 00:10:22.318 9053.969 - 9106.609: 73.5102% ( 449) 00:10:22.318 9106.609 - 9159.248: 76.3081% ( 385) 00:10:22.318 9159.248 - 9211.888: 78.7863% ( 341) 00:10:22.318 9211.888 - 9264.527: 80.8067% ( 278) 00:10:22.318 9264.527 - 9317.166: 82.7326% ( 265) 00:10:22.318 9317.166 - 9369.806: 84.5349% ( 248) 00:10:22.318 9369.806 - 9422.445: 86.1555% ( 223) 00:10:22.318 9422.445 - 9475.084: 87.5654% ( 194) 00:10:22.318 9475.084 - 9527.724: 88.5901% ( 141) 00:10:22.318 9527.724 - 9580.363: 89.5058% ( 126) 00:10:22.318 9580.363 - 9633.002: 90.3343% ( 114) 00:10:22.318 9633.002 - 9685.642: 90.9956% ( 91) 00:10:22.318 9685.642 - 9738.281: 91.5770% ( 80) 00:10:22.318 9738.281 - 9790.920: 92.1221% ( 75) 00:10:22.318 9790.920 - 9843.560: 92.6017% ( 66) 00:10:22.318 9843.560 - 9896.199: 92.9578% ( 49) 00:10:22.318 9896.199 - 9948.839: 93.2994% ( 47) 00:10:22.318 9948.839 - 10001.478: 93.6047% ( 42) 00:10:22.318 10001.478 - 10054.117: 93.8081% ( 28) 00:10:22.318 10054.117 - 10106.757: 93.9753% ( 23) 00:10:22.318 10106.757 - 10159.396: 94.1061% ( 18) 00:10:22.318 10159.396 - 10212.035: 94.1642% ( 8) 00:10:22.318 10212.035 - 10264.675: 94.2224% ( 8) 00:10:22.318 10264.675 - 10317.314: 94.2878% ( 9) 00:10:22.318 10317.314 - 10369.953: 94.3387% ( 7) 00:10:22.318 10369.953 - 10422.593: 94.3750% ( 5) 00:10:22.318 10422.593 - 10475.232: 94.4041% ( 4) 00:10:22.318 10475.232 - 10527.871: 94.4331% ( 4) 00:10:22.318 10527.871 - 10580.511: 94.4767% ( 6) 00:10:22.318 10580.511 - 10633.150: 94.5131% ( 5) 00:10:22.318 10633.150 - 10685.790: 94.5422% ( 4) 00:10:22.318 10685.790 - 10738.429: 94.5858% ( 6) 00:10:22.318 10738.429 - 10791.068: 94.6148% ( 4) 00:10:22.318 10791.068 - 10843.708: 94.6512% ( 5) 00:10:22.318 10843.708 - 10896.347: 94.7020% ( 7) 00:10:22.318 10896.347 - 10948.986: 94.7529% ( 7) 00:10:22.318 10948.986 - 11001.626: 94.7965% ( 6) 00:10:22.318 11001.626 - 11054.265: 94.8401% ( 6) 00:10:22.318 11054.265 - 11106.904: 94.8765% ( 5) 00:10:22.318 11106.904 - 11159.544: 94.9055% ( 4) 00:10:22.318 11159.544 - 11212.183: 94.9346% ( 4) 00:10:22.318 11212.183 - 11264.822: 94.9782% ( 6) 00:10:22.318 11264.822 - 11317.462: 
95.0654% ( 12) 00:10:22.318 11317.462 - 11370.101: 95.1381% ( 10) 00:10:22.318 11370.101 - 11422.741: 95.1890% ( 7) 00:10:22.318 11422.741 - 11475.380: 95.2471% ( 8) 00:10:22.318 11475.380 - 11528.019: 95.2980% ( 7) 00:10:22.318 11528.019 - 11580.659: 95.3561% ( 8) 00:10:22.318 11580.659 - 11633.298: 95.4215% ( 9) 00:10:22.318 11633.298 - 11685.937: 95.4797% ( 8) 00:10:22.318 11685.937 - 11738.577: 95.5451% ( 9) 00:10:22.318 11738.577 - 11791.216: 95.5887% ( 6) 00:10:22.318 11791.216 - 11843.855: 95.6541% ( 9) 00:10:22.318 11843.855 - 11896.495: 95.7122% ( 8) 00:10:22.318 11896.495 - 11949.134: 95.7776% ( 9) 00:10:22.318 11949.134 - 12001.773: 95.8358% ( 8) 00:10:22.318 12001.773 - 12054.413: 95.8939% ( 8) 00:10:22.318 12054.413 - 12107.052: 95.9520% ( 8) 00:10:22.318 12107.052 - 12159.692: 95.9956% ( 6) 00:10:22.318 12159.692 - 12212.331: 96.0174% ( 3) 00:10:22.318 12212.331 - 12264.970: 96.0610% ( 6) 00:10:22.318 12264.970 - 12317.610: 96.1192% ( 8) 00:10:22.318 12317.610 - 12370.249: 96.1701% ( 7) 00:10:22.318 12370.249 - 12422.888: 96.2137% ( 6) 00:10:22.318 12422.888 - 12475.528: 96.2645% ( 7) 00:10:22.318 12475.528 - 12528.167: 96.3081% ( 6) 00:10:22.318 12528.167 - 12580.806: 96.3445% ( 5) 00:10:22.318 12580.806 - 12633.446: 96.3735% ( 4) 00:10:22.318 12633.446 - 12686.085: 96.3953% ( 3) 00:10:22.318 12686.085 - 12738.724: 96.4099% ( 2) 00:10:22.318 12738.724 - 12791.364: 96.4244% ( 2) 00:10:22.318 12791.364 - 12844.003: 96.4390% ( 2) 00:10:22.318 12844.003 - 12896.643: 96.4535% ( 2) 00:10:22.318 12896.643 - 12949.282: 96.4680% ( 2) 00:10:22.318 12949.282 - 13001.921: 96.4898% ( 3) 00:10:22.318 13001.921 - 13054.561: 96.5044% ( 2) 00:10:22.318 13054.561 - 13107.200: 96.5189% ( 2) 00:10:22.318 13107.200 - 13159.839: 96.5334% ( 2) 00:10:22.318 13159.839 - 13212.479: 96.5480% ( 2) 00:10:22.318 13212.479 - 13265.118: 96.5625% ( 2) 00:10:22.318 13265.118 - 13317.757: 96.5916% ( 4) 00:10:22.318 13317.757 - 13370.397: 96.6206% ( 4) 00:10:22.318 13370.397 - 13423.036: 96.6497% ( 4) 00:10:22.318 13423.036 - 13475.676: 96.6715% ( 3) 00:10:22.318 13475.676 - 13580.954: 96.7297% ( 8) 00:10:22.318 13580.954 - 13686.233: 96.7805% ( 7) 00:10:22.318 13686.233 - 13791.512: 96.8314% ( 7) 00:10:22.318 13791.512 - 13896.790: 96.8823% ( 7) 00:10:22.318 13896.790 - 14002.069: 96.9113% ( 4) 00:10:22.318 14002.069 - 14107.348: 96.9331% ( 3) 00:10:22.318 14107.348 - 14212.627: 96.9622% ( 4) 00:10:22.318 14212.627 - 14317.905: 97.0131% ( 7) 00:10:22.318 14317.905 - 14423.184: 97.0712% ( 8) 00:10:22.318 14423.184 - 14528.463: 97.1294% ( 8) 00:10:22.318 14528.463 - 14633.741: 97.1948% ( 9) 00:10:22.318 14633.741 - 14739.020: 97.2602% ( 9) 00:10:22.318 14739.020 - 14844.299: 97.3183% ( 8) 00:10:22.318 14844.299 - 14949.578: 97.3765% ( 8) 00:10:22.318 14949.578 - 15054.856: 97.4346% ( 8) 00:10:22.318 15054.856 - 15160.135: 97.4927% ( 8) 00:10:22.318 15160.135 - 15265.414: 97.5581% ( 9) 00:10:22.318 15265.414 - 15370.692: 97.6235% ( 9) 00:10:22.318 15370.692 - 15475.971: 97.6599% ( 5) 00:10:22.318 15475.971 - 15581.250: 97.6744% ( 2) 00:10:22.318 16318.201 - 16423.480: 97.7108% ( 5) 00:10:22.318 16423.480 - 16528.758: 97.7398% ( 4) 00:10:22.318 16528.758 - 16634.037: 97.8052% ( 9) 00:10:22.318 16634.037 - 16739.316: 97.8779% ( 10) 00:10:22.318 16739.316 - 16844.594: 97.9578% ( 11) 00:10:22.318 16844.594 - 16949.873: 98.0378% ( 11) 00:10:22.318 16949.873 - 17055.152: 98.1177% ( 11) 00:10:22.318 17055.152 - 17160.431: 98.2049% ( 12) 00:10:22.318 17160.431 - 17265.709: 98.2776% ( 10) 00:10:22.318 17265.709 - 
17370.988: 98.3576% ( 11) 00:10:22.318 17370.988 - 17476.267: 98.4302% ( 10) 00:10:22.318 17476.267 - 17581.545: 98.5029% ( 10) 00:10:22.318 17581.545 - 17686.824: 98.5756% ( 10) 00:10:22.318 17686.824 - 17792.103: 98.6555% ( 11) 00:10:22.318 17792.103 - 17897.382: 98.7355% ( 11) 00:10:22.318 17897.382 - 18002.660: 98.7791% ( 6) 00:10:22.318 18002.660 - 18107.939: 98.8299% ( 7) 00:10:22.318 18107.939 - 18213.218: 98.8735% ( 6) 00:10:22.318 18213.218 - 18318.496: 98.9172% ( 6) 00:10:22.318 18318.496 - 18423.775: 98.9608% ( 6) 00:10:22.318 18423.775 - 18529.054: 98.9898% ( 4) 00:10:22.318 18529.054 - 18634.333: 99.0262% ( 5) 00:10:22.318 18634.333 - 18739.611: 99.0698% ( 6) 00:10:22.318 34320.861 - 34531.418: 99.0988% ( 4) 00:10:22.319 34531.418 - 34741.976: 99.1933% ( 13) 00:10:22.319 34741.976 - 34952.533: 99.2951% ( 14) 00:10:22.319 34952.533 - 35163.091: 99.3968% ( 14) 00:10:22.319 35163.091 - 35373.648: 99.4913% ( 13) 00:10:22.319 35373.648 - 35584.206: 99.5349% ( 6) 00:10:22.319 39795.354 - 40005.912: 99.5422% ( 1) 00:10:22.319 40005.912 - 40216.469: 99.6366% ( 13) 00:10:22.319 40216.469 - 40427.027: 99.7384% ( 14) 00:10:22.319 40427.027 - 40637.584: 99.8328% ( 13) 00:10:22.319 40637.584 - 40848.141: 99.9419% ( 15) 00:10:22.319 40848.141 - 41058.699: 100.0000% ( 8) 00:10:22.319 00:10:22.319 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:22.319 ============================================================================== 00:10:22.319 Range in us Cumulative IO count 00:10:22.319 5290.255 - 5316.575: 0.0145% ( 2) 00:10:22.319 5316.575 - 5342.895: 0.0436% ( 4) 00:10:22.319 5342.895 - 5369.214: 0.0654% ( 3) 00:10:22.319 5369.214 - 5395.534: 0.0727% ( 1) 00:10:22.319 5395.534 - 5421.854: 0.0872% ( 2) 00:10:22.319 5421.854 - 5448.173: 0.0945% ( 1) 00:10:22.319 5448.173 - 5474.493: 0.1090% ( 2) 00:10:22.319 5474.493 - 5500.813: 0.1235% ( 2) 00:10:22.319 5500.813 - 5527.133: 0.1381% ( 2) 00:10:22.319 5527.133 - 5553.452: 0.1526% ( 2) 00:10:22.319 5553.452 - 5579.772: 0.1672% ( 2) 00:10:22.319 5579.772 - 5606.092: 0.1817% ( 2) 00:10:22.319 5606.092 - 5632.411: 0.2035% ( 3) 00:10:22.319 5632.411 - 5658.731: 0.2180% ( 2) 00:10:22.319 5658.731 - 5685.051: 0.2326% ( 2) 00:10:22.319 5685.051 - 5711.370: 0.2471% ( 2) 00:10:22.319 5711.370 - 5737.690: 0.2616% ( 2) 00:10:22.319 5737.690 - 5764.010: 0.2834% ( 3) 00:10:22.319 5764.010 - 5790.329: 0.2980% ( 2) 00:10:22.319 5790.329 - 5816.649: 0.3125% ( 2) 00:10:22.319 5816.649 - 5842.969: 0.3270% ( 2) 00:10:22.319 5842.969 - 5869.288: 0.3416% ( 2) 00:10:22.319 5869.288 - 5895.608: 0.3561% ( 2) 00:10:22.319 5895.608 - 5921.928: 0.3779% ( 3) 00:10:22.319 5921.928 - 5948.247: 0.3924% ( 2) 00:10:22.319 5948.247 - 5974.567: 0.4070% ( 2) 00:10:22.319 5974.567 - 6000.887: 0.4215% ( 2) 00:10:22.319 6000.887 - 6027.206: 0.4433% ( 3) 00:10:22.319 6027.206 - 6053.526: 0.4578% ( 2) 00:10:22.319 6053.526 - 6079.846: 0.4651% ( 1) 00:10:22.319 7685.346 - 7737.986: 0.5160% ( 7) 00:10:22.319 7737.986 - 7790.625: 0.5523% ( 5) 00:10:22.319 7790.625 - 7843.264: 0.5669% ( 2) 00:10:22.319 7843.264 - 7895.904: 0.6541% ( 12) 00:10:22.319 7895.904 - 7948.543: 0.8430% ( 26) 00:10:22.319 7948.543 - 8001.182: 1.2718% ( 59) 00:10:22.319 8001.182 - 8053.822: 2.1439% ( 120) 00:10:22.319 8053.822 - 8106.461: 3.3358% ( 164) 00:10:22.319 8106.461 - 8159.100: 4.8110% ( 203) 00:10:22.319 8159.100 - 8211.740: 6.6933% ( 259) 00:10:22.319 8211.740 - 8264.379: 9.0044% ( 318) 00:10:22.319 8264.379 - 8317.018: 11.8677% ( 394) 00:10:22.319 8317.018 - 8369.658: 15.2253% ( 462) 
00:10:22.319 8369.658 - 8422.297: 18.8517% ( 499) 00:10:22.319 8422.297 - 8474.937: 22.8488% ( 550) 00:10:22.319 8474.937 - 8527.576: 26.9404% ( 563) 00:10:22.319 8527.576 - 8580.215: 31.2936% ( 599) 00:10:22.319 8580.215 - 8632.855: 35.9811% ( 645) 00:10:22.319 8632.855 - 8685.494: 40.6977% ( 649) 00:10:22.319 8685.494 - 8738.133: 45.5233% ( 664) 00:10:22.319 8738.133 - 8790.773: 50.4142% ( 673) 00:10:22.319 8790.773 - 8843.412: 54.9201% ( 620) 00:10:22.319 8843.412 - 8896.051: 59.1497% ( 582) 00:10:22.319 8896.051 - 8948.691: 63.1904% ( 556) 00:10:22.319 8948.691 - 9001.330: 67.0567% ( 532) 00:10:22.319 9001.330 - 9053.969: 70.7631% ( 510) 00:10:22.319 9053.969 - 9106.609: 74.1352% ( 464) 00:10:22.319 9106.609 - 9159.248: 76.9186% ( 383) 00:10:22.319 9159.248 - 9211.888: 79.2297% ( 318) 00:10:22.319 9211.888 - 9264.527: 81.2645% ( 280) 00:10:22.319 9264.527 - 9317.166: 83.0959% ( 252) 00:10:22.319 9317.166 - 9369.806: 84.8110% ( 236) 00:10:22.319 9369.806 - 9422.445: 86.4462% ( 225) 00:10:22.319 9422.445 - 9475.084: 87.8488% ( 193) 00:10:22.319 9475.084 - 9527.724: 88.9535% ( 152) 00:10:22.319 9527.724 - 9580.363: 89.8692% ( 126) 00:10:22.319 9580.363 - 9633.002: 90.6759% ( 111) 00:10:22.319 9633.002 - 9685.642: 91.4026% ( 100) 00:10:22.319 9685.642 - 9738.281: 91.9985% ( 82) 00:10:22.319 9738.281 - 9790.920: 92.5145% ( 71) 00:10:22.319 9790.920 - 9843.560: 92.9433% ( 59) 00:10:22.319 9843.560 - 9896.199: 93.2849% ( 47) 00:10:22.319 9896.199 - 9948.839: 93.5465% ( 36) 00:10:22.319 9948.839 - 10001.478: 93.7863% ( 33) 00:10:22.319 10001.478 - 10054.117: 93.9680% ( 25) 00:10:22.319 10054.117 - 10106.757: 94.0843% ( 16) 00:10:22.319 10106.757 - 10159.396: 94.1715% ( 12) 00:10:22.319 10159.396 - 10212.035: 94.2442% ( 10) 00:10:22.319 10212.035 - 10264.675: 94.3241% ( 11) 00:10:22.319 10264.675 - 10317.314: 94.3895% ( 9) 00:10:22.319 10317.314 - 10369.953: 94.4404% ( 7) 00:10:22.319 10369.953 - 10422.593: 94.4985% ( 8) 00:10:22.319 10422.593 - 10475.232: 94.5276% ( 4) 00:10:22.319 10475.232 - 10527.871: 94.5567% ( 4) 00:10:22.319 10527.871 - 10580.511: 94.6003% ( 6) 00:10:22.319 10580.511 - 10633.150: 94.6294% ( 4) 00:10:22.319 10633.150 - 10685.790: 94.6657% ( 5) 00:10:22.319 10685.790 - 10738.429: 94.6948% ( 4) 00:10:22.319 10738.429 - 10791.068: 94.7238% ( 4) 00:10:22.319 10791.068 - 10843.708: 94.7674% ( 6) 00:10:22.319 10843.708 - 10896.347: 94.8328% ( 9) 00:10:22.319 10896.347 - 10948.986: 94.8837% ( 7) 00:10:22.319 10948.986 - 11001.626: 94.9709% ( 12) 00:10:22.319 11001.626 - 11054.265: 95.0145% ( 6) 00:10:22.319 11054.265 - 11106.904: 95.0509% ( 5) 00:10:22.319 11106.904 - 11159.544: 95.0872% ( 5) 00:10:22.319 11159.544 - 11212.183: 95.1163% ( 4) 00:10:22.319 11212.183 - 11264.822: 95.1599% ( 6) 00:10:22.319 11264.822 - 11317.462: 95.1890% ( 4) 00:10:22.319 11317.462 - 11370.101: 95.2253% ( 5) 00:10:22.319 11370.101 - 11422.741: 95.2616% ( 5) 00:10:22.319 11422.741 - 11475.380: 95.2980% ( 5) 00:10:22.319 11475.380 - 11528.019: 95.3270% ( 4) 00:10:22.319 11528.019 - 11580.659: 95.3561% ( 4) 00:10:22.319 11580.659 - 11633.298: 95.3997% ( 6) 00:10:22.319 11633.298 - 11685.937: 95.4215% ( 3) 00:10:22.319 11685.937 - 11738.577: 95.4578% ( 5) 00:10:22.319 11738.577 - 11791.216: 95.4869% ( 4) 00:10:22.319 11791.216 - 11843.855: 95.5160% ( 4) 00:10:22.319 11843.855 - 11896.495: 95.5451% ( 4) 00:10:22.319 11896.495 - 11949.134: 95.5814% ( 5) 00:10:22.319 11949.134 - 12001.773: 95.6105% ( 4) 00:10:22.319 12001.773 - 12054.413: 95.6395% ( 4) 00:10:22.319 12054.413 - 12107.052: 95.6759% ( 5) 
00:10:22.319 12107.052 - 12159.692: 95.7049% ( 4) 00:10:22.319 12159.692 - 12212.331: 95.7340% ( 4) 00:10:22.319 12212.331 - 12264.970: 95.7631% ( 4) 00:10:22.319 12264.970 - 12317.610: 95.7849% ( 3) 00:10:22.319 12317.610 - 12370.249: 95.7994% ( 2) 00:10:22.319 12370.249 - 12422.888: 95.8140% ( 2) 00:10:22.319 12686.085 - 12738.724: 95.8576% ( 6) 00:10:22.319 12738.724 - 12791.364: 95.9012% ( 6) 00:10:22.319 12791.364 - 12844.003: 95.9375% ( 5) 00:10:22.319 12844.003 - 12896.643: 95.9884% ( 7) 00:10:22.319 12896.643 - 12949.282: 96.0247% ( 5) 00:10:22.319 12949.282 - 13001.921: 96.0683% ( 6) 00:10:22.319 13001.921 - 13054.561: 96.1119% ( 6) 00:10:22.319 13054.561 - 13107.200: 96.1773% ( 9) 00:10:22.319 13107.200 - 13159.839: 96.2282% ( 7) 00:10:22.319 13159.839 - 13212.479: 96.2863% ( 8) 00:10:22.319 13212.479 - 13265.118: 96.3299% ( 6) 00:10:22.319 13265.118 - 13317.757: 96.3735% ( 6) 00:10:22.319 13317.757 - 13370.397: 96.4317% ( 8) 00:10:22.319 13370.397 - 13423.036: 96.4826% ( 7) 00:10:22.319 13423.036 - 13475.676: 96.5407% ( 8) 00:10:22.319 13475.676 - 13580.954: 96.6497% ( 15) 00:10:22.319 13580.954 - 13686.233: 96.7733% ( 17) 00:10:22.319 13686.233 - 13791.512: 96.8677% ( 13) 00:10:22.319 13791.512 - 13896.790: 96.9477% ( 11) 00:10:22.319 13896.790 - 14002.069: 97.0349% ( 12) 00:10:22.319 14002.069 - 14107.348: 97.1221% ( 12) 00:10:22.319 14107.348 - 14212.627: 97.2093% ( 12) 00:10:22.319 14212.627 - 14317.905: 97.3038% ( 13) 00:10:22.319 14317.905 - 14423.184: 97.3692% ( 9) 00:10:22.319 14423.184 - 14528.463: 97.4273% ( 8) 00:10:22.319 14528.463 - 14633.741: 97.4855% ( 8) 00:10:22.319 14633.741 - 14739.020: 97.5509% ( 9) 00:10:22.319 14739.020 - 14844.299: 97.6017% ( 7) 00:10:22.319 14844.299 - 14949.578: 97.6308% ( 4) 00:10:22.319 14949.578 - 15054.856: 97.6599% ( 4) 00:10:22.319 15054.856 - 15160.135: 97.6744% ( 2) 00:10:22.319 16528.758 - 16634.037: 97.6962% ( 3) 00:10:22.319 16634.037 - 16739.316: 97.7616% ( 9) 00:10:22.319 16739.316 - 16844.594: 97.8416% ( 11) 00:10:22.319 16844.594 - 16949.873: 97.9142% ( 10) 00:10:22.319 16949.873 - 17055.152: 97.9724% ( 8) 00:10:22.319 17055.152 - 17160.431: 98.0451% ( 10) 00:10:22.319 17160.431 - 17265.709: 98.1105% ( 9) 00:10:22.319 17265.709 - 17370.988: 98.1904% ( 11) 00:10:22.319 17370.988 - 17476.267: 98.2776% ( 12) 00:10:22.319 17476.267 - 17581.545: 98.3576% ( 11) 00:10:22.319 17581.545 - 17686.824: 98.4375% ( 11) 00:10:22.319 17686.824 - 17792.103: 98.5247% ( 12) 00:10:22.319 17792.103 - 17897.382: 98.6337% ( 15) 00:10:22.319 17897.382 - 18002.660: 98.7282% ( 13) 00:10:22.319 18002.660 - 18107.939: 98.7791% ( 7) 00:10:22.319 18107.939 - 18213.218: 98.8299% ( 7) 00:10:22.319 18213.218 - 18318.496: 98.8808% ( 7) 00:10:22.319 18318.496 - 18423.775: 98.9317% ( 7) 00:10:22.319 18423.775 - 18529.054: 98.9826% ( 7) 00:10:22.319 18529.054 - 18634.333: 99.0262% ( 6) 00:10:22.319 18634.333 - 18739.611: 99.0698% ( 6) 00:10:22.319 34531.418 - 34741.976: 99.1424% ( 10) 00:10:22.319 34741.976 - 34952.533: 99.2442% ( 14) 00:10:22.319 34952.533 - 35163.091: 99.3459% ( 14) 00:10:22.319 35163.091 - 35373.648: 99.4477% ( 14) 00:10:22.319 35373.648 - 35584.206: 99.5349% ( 12) 00:10:22.319 40005.912 - 40216.469: 99.6076% ( 10) 00:10:22.319 40216.469 - 40427.027: 99.7093% ( 14) 00:10:22.319 40427.027 - 40637.584: 99.8110% ( 14) 00:10:22.319 40637.584 - 40848.141: 99.9128% ( 14) 00:10:22.319 40848.141 - 41058.699: 100.0000% ( 12) 00:10:22.319 00:10:22.319 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:22.319 
============================================================================== 00:10:22.319 Range in us Cumulative IO count 00:10:22.319 4948.100 - 4974.419: 0.0145% ( 2) 00:10:22.319 4974.419 - 5000.739: 0.0291% ( 2) 00:10:22.319 5000.739 - 5027.059: 0.0436% ( 2) 00:10:22.319 5027.059 - 5053.378: 0.0581% ( 2) 00:10:22.319 5053.378 - 5079.698: 0.0727% ( 2) 00:10:22.319 5079.698 - 5106.018: 0.0945% ( 3) 00:10:22.319 5106.018 - 5132.337: 0.1090% ( 2) 00:10:22.319 5132.337 - 5158.657: 0.1308% ( 3) 00:10:22.319 5158.657 - 5184.977: 0.1381% ( 1) 00:10:22.319 5184.977 - 5211.296: 0.1526% ( 2) 00:10:22.319 5211.296 - 5237.616: 0.1744% ( 3) 00:10:22.319 5237.616 - 5263.936: 0.1890% ( 2) 00:10:22.319 5263.936 - 5290.255: 0.2035% ( 2) 00:10:22.319 5290.255 - 5316.575: 0.2180% ( 2) 00:10:22.319 5316.575 - 5342.895: 0.2326% ( 2) 00:10:22.319 5342.895 - 5369.214: 0.2471% ( 2) 00:10:22.319 5369.214 - 5395.534: 0.2689% ( 3) 00:10:22.319 5395.534 - 5421.854: 0.2834% ( 2) 00:10:22.319 5421.854 - 5448.173: 0.2980% ( 2) 00:10:22.319 5448.173 - 5474.493: 0.3125% ( 2) 00:10:22.319 5474.493 - 5500.813: 0.3270% ( 2) 00:10:22.319 5500.813 - 5527.133: 0.3416% ( 2) 00:10:22.319 5527.133 - 5553.452: 0.3561% ( 2) 00:10:22.319 5553.452 - 5579.772: 0.3706% ( 2) 00:10:22.319 5579.772 - 5606.092: 0.3924% ( 3) 00:10:22.319 5606.092 - 5632.411: 0.4070% ( 2) 00:10:22.319 5632.411 - 5658.731: 0.4215% ( 2) 00:10:22.319 5658.731 - 5685.051: 0.4360% ( 2) 00:10:22.319 5685.051 - 5711.370: 0.4506% ( 2) 00:10:22.319 5711.370 - 5737.690: 0.4651% ( 2) 00:10:22.319 7264.231 - 7316.871: 0.4724% ( 1) 00:10:22.319 7316.871 - 7369.510: 0.5015% ( 4) 00:10:22.319 7369.510 - 7422.149: 0.5305% ( 4) 00:10:22.319 7422.149 - 7474.789: 0.5669% ( 5) 00:10:22.319 7474.789 - 7527.428: 0.5887% ( 3) 00:10:22.319 7527.428 - 7580.067: 0.6177% ( 4) 00:10:22.319 7580.067 - 7632.707: 0.6541% ( 5) 00:10:22.319 7632.707 - 7685.346: 0.6831% ( 4) 00:10:22.319 7685.346 - 7737.986: 0.7195% ( 5) 00:10:22.319 7737.986 - 7790.625: 0.7340% ( 2) 00:10:22.319 7790.625 - 7843.264: 0.7922% ( 8) 00:10:22.319 7843.264 - 7895.904: 0.8794% ( 12) 00:10:22.319 7895.904 - 7948.543: 1.0247% ( 20) 00:10:22.319 7948.543 - 8001.182: 1.3953% ( 51) 00:10:22.319 8001.182 - 8053.822: 2.2965% ( 124) 00:10:22.319 8053.822 - 8106.461: 3.5174% ( 168) 00:10:22.319 8106.461 - 8159.100: 4.9273% ( 194) 00:10:22.319 8159.100 - 8211.740: 6.8387% ( 263) 00:10:22.319 8211.740 - 8264.379: 9.2951% ( 338) 00:10:22.319 8264.379 - 8317.018: 12.0567% ( 380) 00:10:22.319 8317.018 - 8369.658: 15.4360% ( 465) 00:10:22.319 8369.658 - 8422.297: 19.3532% ( 539) 00:10:22.319 8422.297 - 8474.937: 23.1686% ( 525) 00:10:22.319 8474.937 - 8527.576: 27.1512% ( 548) 00:10:22.319 8527.576 - 8580.215: 31.7151% ( 628) 00:10:22.319 8580.215 - 8632.855: 36.3299% ( 635) 00:10:22.319 8632.855 - 8685.494: 41.0320% ( 647) 00:10:22.319 8685.494 - 8738.133: 45.8939% ( 669) 00:10:22.319 8738.133 - 8790.773: 50.7049% ( 662) 00:10:22.319 8790.773 - 8843.412: 55.2980% ( 632) 00:10:22.319 8843.412 - 8896.051: 59.5567% ( 586) 00:10:22.319 8896.051 - 8948.691: 63.4884% ( 541) 00:10:22.319 8948.691 - 9001.330: 67.2384% ( 516) 00:10:22.319 9001.330 - 9053.969: 70.8721% ( 500) 00:10:22.319 9053.969 - 9106.609: 74.2442% ( 464) 00:10:22.319 9106.609 - 9159.248: 76.9549% ( 373) 00:10:22.319 9159.248 - 9211.888: 79.1061% ( 296) 00:10:22.319 9211.888 - 9264.527: 81.1410% ( 280) 00:10:22.319 9264.527 - 9317.166: 83.0451% ( 262) 00:10:22.319 9317.166 - 9369.806: 84.7674% ( 237) 00:10:22.319 9369.806 - 9422.445: 86.4099% ( 226) 00:10:22.319 
9422.445 - 9475.084: 87.6890% ( 176) 00:10:22.319 9475.084 - 9527.724: 88.8299% ( 157) 00:10:22.319 9527.724 - 9580.363: 89.8474% ( 140) 00:10:22.319 9580.363 - 9633.002: 90.6759% ( 114) 00:10:22.319 9633.002 - 9685.642: 91.3517% ( 93) 00:10:22.319 9685.642 - 9738.281: 91.9404% ( 81) 00:10:22.319 9738.281 - 9790.920: 92.4564% ( 71) 00:10:22.319 9790.920 - 9843.560: 92.9360% ( 66) 00:10:22.319 9843.560 - 9896.199: 93.3503% ( 57) 00:10:22.319 9896.199 - 9948.839: 93.6846% ( 46) 00:10:22.319 9948.839 - 10001.478: 93.9462% ( 36) 00:10:22.319 10001.478 - 10054.117: 94.1424% ( 27) 00:10:22.319 10054.117 - 10106.757: 94.2369% ( 13) 00:10:22.319 10106.757 - 10159.396: 94.3169% ( 11) 00:10:22.319 10159.396 - 10212.035: 94.3895% ( 10) 00:10:22.319 10212.035 - 10264.675: 94.4767% ( 12) 00:10:22.319 10264.675 - 10317.314: 94.5276% ( 7) 00:10:22.319 10317.314 - 10369.953: 94.5567% ( 4) 00:10:22.319 10369.953 - 10422.593: 94.5930% ( 5) 00:10:22.319 10422.593 - 10475.232: 94.6221% ( 4) 00:10:22.319 10475.232 - 10527.871: 94.6657% ( 6) 00:10:22.319 10527.871 - 10580.511: 94.7020% ( 5) 00:10:22.319 10580.511 - 10633.150: 94.7384% ( 5) 00:10:22.319 10633.150 - 10685.790: 94.8038% ( 9) 00:10:22.319 10685.790 - 10738.429: 94.8837% ( 11) 00:10:22.319 10738.429 - 10791.068: 94.9419% ( 8) 00:10:22.319 10791.068 - 10843.708: 95.0000% ( 8) 00:10:22.319 10843.708 - 10896.347: 95.0509% ( 7) 00:10:22.319 10896.347 - 10948.986: 95.0945% ( 6) 00:10:22.319 10948.986 - 11001.626: 95.1235% ( 4) 00:10:22.319 11001.626 - 11054.265: 95.1599% ( 5) 00:10:22.319 11054.265 - 11106.904: 95.1962% ( 5) 00:10:22.319 11106.904 - 11159.544: 95.2326% ( 5) 00:10:22.319 11159.544 - 11212.183: 95.2689% ( 5) 00:10:22.319 11212.183 - 11264.822: 95.2980% ( 4) 00:10:22.319 11264.822 - 11317.462: 95.3416% ( 6) 00:10:22.319 11317.462 - 11370.101: 95.3634% ( 3) 00:10:22.319 11370.101 - 11422.741: 95.3997% ( 5) 00:10:22.319 11422.741 - 11475.380: 95.4288% ( 4) 00:10:22.319 11475.380 - 11528.019: 95.4651% ( 5) 00:10:22.319 11528.019 - 11580.659: 95.5015% ( 5) 00:10:22.319 11580.659 - 11633.298: 95.5378% ( 5) 00:10:22.319 11633.298 - 11685.937: 95.5669% ( 4) 00:10:22.319 11685.937 - 11738.577: 95.6105% ( 6) 00:10:22.319 11738.577 - 11791.216: 95.6395% ( 4) 00:10:22.319 11791.216 - 11843.855: 95.6613% ( 3) 00:10:22.319 11843.855 - 11896.495: 95.6977% ( 5) 00:10:22.319 11896.495 - 11949.134: 95.7340% ( 5) 00:10:22.319 11949.134 - 12001.773: 95.7703% ( 5) 00:10:22.319 12001.773 - 12054.413: 95.7994% ( 4) 00:10:22.319 12054.413 - 12107.052: 95.8140% ( 2) 00:10:22.319 12633.446 - 12686.085: 95.8285% ( 2) 00:10:22.319 12686.085 - 12738.724: 95.8430% ( 2) 00:10:22.319 12738.724 - 12791.364: 95.8503% ( 1) 00:10:22.319 12791.364 - 12844.003: 95.8648% ( 2) 00:10:22.319 12844.003 - 12896.643: 95.8794% ( 2) 00:10:22.319 12896.643 - 12949.282: 95.8939% ( 2) 00:10:22.319 12949.282 - 13001.921: 95.9012% ( 1) 00:10:22.319 13001.921 - 13054.561: 95.9375% ( 5) 00:10:22.319 13054.561 - 13107.200: 95.9666% ( 4) 00:10:22.319 13107.200 - 13159.839: 95.9956% ( 4) 00:10:22.319 13159.839 - 13212.479: 96.0320% ( 5) 00:10:22.319 13212.479 - 13265.118: 96.0683% ( 5) 00:10:22.319 13265.118 - 13317.757: 96.0901% ( 3) 00:10:22.319 13317.757 - 13370.397: 96.1337% ( 6) 00:10:22.319 13370.397 - 13423.036: 96.1919% ( 8) 00:10:22.319 13423.036 - 13475.676: 96.2355% ( 6) 00:10:22.319 13475.676 - 13580.954: 96.3227% ( 12) 00:10:22.319 13580.954 - 13686.233: 96.4317% ( 15) 00:10:22.319 13686.233 - 13791.512: 96.5698% ( 19) 00:10:22.319 13791.512 - 13896.790: 96.7078% ( 19) 00:10:22.319 
13896.790 - 14002.069: 96.8605% ( 21) 00:10:22.319 14002.069 - 14107.348: 97.0058% ( 20) 00:10:22.319 14107.348 - 14212.627: 97.1512% ( 20) 00:10:22.319 14212.627 - 14317.905: 97.2820% ( 18) 00:10:22.319 14317.905 - 14423.184: 97.3983% ( 16) 00:10:22.319 14423.184 - 14528.463: 97.5073% ( 15) 00:10:22.319 14528.463 - 14633.741: 97.5654% ( 8) 00:10:22.319 14633.741 - 14739.020: 97.6090% ( 6) 00:10:22.320 14739.020 - 14844.299: 97.6381% ( 4) 00:10:22.320 14844.299 - 14949.578: 97.6744% ( 5) 00:10:22.320 16002.365 - 16107.643: 97.7035% ( 4) 00:10:22.320 16107.643 - 16212.922: 97.7398% ( 5) 00:10:22.320 16212.922 - 16318.201: 97.7834% ( 6) 00:10:22.320 16318.201 - 16423.480: 97.8198% ( 5) 00:10:22.320 16423.480 - 16528.758: 97.8634% ( 6) 00:10:22.320 16528.758 - 16634.037: 97.9142% ( 7) 00:10:22.320 16634.037 - 16739.316: 97.9578% ( 6) 00:10:22.320 16739.316 - 16844.594: 98.0015% ( 6) 00:10:22.320 16844.594 - 16949.873: 98.0451% ( 6) 00:10:22.320 16949.873 - 17055.152: 98.1032% ( 8) 00:10:22.320 17055.152 - 17160.431: 98.1686% ( 9) 00:10:22.320 17160.431 - 17265.709: 98.2267% ( 8) 00:10:22.320 17265.709 - 17370.988: 98.2703% ( 6) 00:10:22.320 17370.988 - 17476.267: 98.3067% ( 5) 00:10:22.320 17476.267 - 17581.545: 98.3503% ( 6) 00:10:22.320 17581.545 - 17686.824: 98.3794% ( 4) 00:10:22.320 17686.824 - 17792.103: 98.4230% ( 6) 00:10:22.320 17792.103 - 17897.382: 98.4593% ( 5) 00:10:22.320 17897.382 - 18002.660: 98.5029% ( 6) 00:10:22.320 18002.660 - 18107.939: 98.5465% ( 6) 00:10:22.320 18107.939 - 18213.218: 98.5901% ( 6) 00:10:22.320 18213.218 - 18318.496: 98.6047% ( 2) 00:10:22.320 18318.496 - 18423.775: 98.6265% ( 3) 00:10:22.320 18423.775 - 18529.054: 98.6701% ( 6) 00:10:22.320 18529.054 - 18634.333: 98.7209% ( 7) 00:10:22.320 18634.333 - 18739.611: 98.7718% ( 7) 00:10:22.320 18739.611 - 18844.890: 98.8227% ( 7) 00:10:22.320 18844.890 - 18950.169: 98.8590% ( 5) 00:10:22.320 18950.169 - 19055.447: 98.9099% ( 7) 00:10:22.320 19055.447 - 19160.726: 98.9535% ( 6) 00:10:22.320 19160.726 - 19266.005: 99.0044% ( 7) 00:10:22.320 19266.005 - 19371.284: 99.0480% ( 6) 00:10:22.320 19371.284 - 19476.562: 99.0698% ( 3) 00:10:22.320 34110.304 - 34320.861: 99.1279% ( 8) 00:10:22.320 34320.861 - 34531.418: 99.2297% ( 14) 00:10:22.320 34531.418 - 34741.976: 99.3314% ( 14) 00:10:22.320 34741.976 - 34952.533: 99.4259% ( 13) 00:10:22.320 34952.533 - 35163.091: 99.5276% ( 14) 00:10:22.320 35163.091 - 35373.648: 99.5349% ( 1) 00:10:22.320 39584.797 - 39795.354: 99.6294% ( 13) 00:10:22.320 39795.354 - 40005.912: 99.7311% ( 14) 00:10:22.320 40005.912 - 40216.469: 99.8256% ( 13) 00:10:22.320 40216.469 - 40427.027: 99.9273% ( 14) 00:10:22.320 40427.027 - 40637.584: 100.0000% ( 10) 00:10:22.320 00:10:22.320 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:22.320 ============================================================================== 00:10:22.320 Range in us Cumulative IO count 00:10:22.320 4605.944 - 4632.263: 0.0145% ( 2) 00:10:22.320 4632.263 - 4658.583: 0.0291% ( 2) 00:10:22.320 4658.583 - 4684.903: 0.0436% ( 2) 00:10:22.320 4684.903 - 4711.222: 0.0654% ( 3) 00:10:22.320 4711.222 - 4737.542: 0.0727% ( 1) 00:10:22.320 4737.542 - 4763.862: 0.0945% ( 3) 00:10:22.320 4763.862 - 4790.182: 0.1090% ( 2) 00:10:22.320 4790.182 - 4816.501: 0.1235% ( 2) 00:10:22.320 4816.501 - 4842.821: 0.1453% ( 3) 00:10:22.320 4842.821 - 4869.141: 0.1599% ( 2) 00:10:22.320 4869.141 - 4895.460: 0.1744% ( 2) 00:10:22.320 4895.460 - 4921.780: 0.1890% ( 2) 00:10:22.320 4921.780 - 4948.100: 0.2108% ( 3) 00:10:22.320 
4948.100 - 4974.419: 0.2253% ( 2) 00:10:22.320 4974.419 - 5000.739: 0.2398% ( 2) 00:10:22.320 5000.739 - 5027.059: 0.2544% ( 2) 00:10:22.320 5027.059 - 5053.378: 0.2689% ( 2) 00:10:22.320 5053.378 - 5079.698: 0.2834% ( 2) 00:10:22.320 5079.698 - 5106.018: 0.2980% ( 2) 00:10:22.320 5106.018 - 5132.337: 0.3125% ( 2) 00:10:22.320 5132.337 - 5158.657: 0.3270% ( 2) 00:10:22.320 5158.657 - 5184.977: 0.3416% ( 2) 00:10:22.320 5184.977 - 5211.296: 0.3561% ( 2) 00:10:22.320 5211.296 - 5237.616: 0.3706% ( 2) 00:10:22.320 5237.616 - 5263.936: 0.3852% ( 2) 00:10:22.320 5263.936 - 5290.255: 0.3997% ( 2) 00:10:22.320 5290.255 - 5316.575: 0.4142% ( 2) 00:10:22.320 5316.575 - 5342.895: 0.4288% ( 2) 00:10:22.320 5342.895 - 5369.214: 0.4433% ( 2) 00:10:22.320 5369.214 - 5395.534: 0.4578% ( 2) 00:10:22.320 5395.534 - 5421.854: 0.4651% ( 1) 00:10:22.320 7001.035 - 7053.674: 0.4797% ( 2) 00:10:22.320 7053.674 - 7106.313: 0.5087% ( 4) 00:10:22.320 7106.313 - 7158.953: 0.5378% ( 4) 00:10:22.320 7158.953 - 7211.592: 0.5741% ( 5) 00:10:22.320 7211.592 - 7264.231: 0.6032% ( 4) 00:10:22.320 7264.231 - 7316.871: 0.6395% ( 5) 00:10:22.320 7316.871 - 7369.510: 0.6759% ( 5) 00:10:22.320 7369.510 - 7422.149: 0.7049% ( 4) 00:10:22.320 7422.149 - 7474.789: 0.7340% ( 4) 00:10:22.320 7474.789 - 7527.428: 0.7631% ( 4) 00:10:22.320 7527.428 - 7580.067: 0.7922% ( 4) 00:10:22.320 7580.067 - 7632.707: 0.8285% ( 5) 00:10:22.320 7632.707 - 7685.346: 0.8576% ( 4) 00:10:22.320 7685.346 - 7737.986: 0.8939% ( 5) 00:10:22.320 7737.986 - 7790.625: 0.9230% ( 4) 00:10:22.320 7790.625 - 7843.264: 0.9448% ( 3) 00:10:22.320 7843.264 - 7895.904: 1.0102% ( 9) 00:10:22.320 7895.904 - 7948.543: 1.1773% ( 23) 00:10:22.320 7948.543 - 8001.182: 1.5189% ( 47) 00:10:22.320 8001.182 - 8053.822: 2.4201% ( 124) 00:10:22.320 8053.822 - 8106.461: 3.5174% ( 151) 00:10:22.320 8106.461 - 8159.100: 4.9782% ( 201) 00:10:22.320 8159.100 - 8211.740: 6.8096% ( 252) 00:10:22.320 8211.740 - 8264.379: 9.1352% ( 320) 00:10:22.320 8264.379 - 8317.018: 12.0276% ( 398) 00:10:22.320 8317.018 - 8369.658: 15.3561% ( 458) 00:10:22.320 8369.658 - 8422.297: 19.1279% ( 519) 00:10:22.320 8422.297 - 8474.937: 23.1468% ( 553) 00:10:22.320 8474.937 - 8527.576: 27.2093% ( 559) 00:10:22.320 8527.576 - 8580.215: 31.6134% ( 606) 00:10:22.320 8580.215 - 8632.855: 36.2573% ( 639) 00:10:22.320 8632.855 - 8685.494: 40.9811% ( 650) 00:10:22.320 8685.494 - 8738.133: 45.6759% ( 646) 00:10:22.320 8738.133 - 8790.773: 50.5741% ( 674) 00:10:22.320 8790.773 - 8843.412: 55.1235% ( 626) 00:10:22.320 8843.412 - 8896.051: 59.4404% ( 594) 00:10:22.320 8896.051 - 8948.691: 63.5102% ( 560) 00:10:22.320 8948.691 - 9001.330: 67.2820% ( 519) 00:10:22.320 9001.330 - 9053.969: 70.9302% ( 502) 00:10:22.320 9053.969 - 9106.609: 74.3895% ( 476) 00:10:22.320 9106.609 - 9159.248: 77.1802% ( 384) 00:10:22.320 9159.248 - 9211.888: 79.6294% ( 337) 00:10:22.320 9211.888 - 9264.527: 81.6061% ( 272) 00:10:22.320 9264.527 - 9317.166: 83.4738% ( 257) 00:10:22.320 9317.166 - 9369.806: 85.1962% ( 237) 00:10:22.320 9369.806 - 9422.445: 86.8241% ( 224) 00:10:22.320 9422.445 - 9475.084: 88.1541% ( 183) 00:10:22.320 9475.084 - 9527.724: 89.2369% ( 149) 00:10:22.320 9527.724 - 9580.363: 90.1744% ( 129) 00:10:22.320 9580.363 - 9633.002: 90.9956% ( 113) 00:10:22.320 9633.002 - 9685.642: 91.6134% ( 85) 00:10:22.320 9685.642 - 9738.281: 92.1730% ( 77) 00:10:22.320 9738.281 - 9790.920: 92.6453% ( 65) 00:10:22.320 9790.920 - 9843.560: 93.0887% ( 61) 00:10:22.320 9843.560 - 9896.199: 93.5102% ( 58) 00:10:22.320 9896.199 - 9948.839: 
93.8590% ( 48) 00:10:22.320 9948.839 - 10001.478: 94.1134% ( 35) 00:10:22.320 10001.478 - 10054.117: 94.3023% ( 26) 00:10:22.320 10054.117 - 10106.757: 94.3823% ( 11) 00:10:22.320 10106.757 - 10159.396: 94.4695% ( 12) 00:10:22.320 10159.396 - 10212.035: 94.5567% ( 12) 00:10:22.320 10212.035 - 10264.675: 94.6076% ( 7) 00:10:22.320 10264.675 - 10317.314: 94.6584% ( 7) 00:10:22.320 10317.314 - 10369.953: 94.6875% ( 4) 00:10:22.320 10369.953 - 10422.593: 94.7384% ( 7) 00:10:22.320 10422.593 - 10475.232: 94.7965% ( 8) 00:10:22.320 10475.232 - 10527.871: 94.8619% ( 9) 00:10:22.320 10527.871 - 10580.511: 94.9273% ( 9) 00:10:22.320 10580.511 - 10633.150: 94.9855% ( 8) 00:10:22.320 10633.150 - 10685.790: 95.0436% ( 8) 00:10:22.320 10685.790 - 10738.429: 95.1017% ( 8) 00:10:22.320 10738.429 - 10791.068: 95.1308% ( 4) 00:10:22.320 10791.068 - 10843.708: 95.1672% ( 5) 00:10:22.320 10843.708 - 10896.347: 95.2035% ( 5) 00:10:22.320 10896.347 - 10948.986: 95.2326% ( 4) 00:10:22.320 10948.986 - 11001.626: 95.2689% ( 5) 00:10:22.320 11001.626 - 11054.265: 95.2980% ( 4) 00:10:22.320 11054.265 - 11106.904: 95.3416% ( 6) 00:10:22.320 11106.904 - 11159.544: 95.3779% ( 5) 00:10:22.320 11159.544 - 11212.183: 95.4070% ( 4) 00:10:22.320 11212.183 - 11264.822: 95.4433% ( 5) 00:10:22.320 11264.822 - 11317.462: 95.4724% ( 4) 00:10:22.320 11317.462 - 11370.101: 95.5087% ( 5) 00:10:22.320 11370.101 - 11422.741: 95.5451% ( 5) 00:10:22.320 11422.741 - 11475.380: 95.5814% ( 5) 00:10:22.320 11475.380 - 11528.019: 95.6105% ( 4) 00:10:22.320 11528.019 - 11580.659: 95.6468% ( 5) 00:10:22.320 11580.659 - 11633.298: 95.6759% ( 4) 00:10:22.320 11633.298 - 11685.937: 95.7122% ( 5) 00:10:22.320 11685.937 - 11738.577: 95.7413% ( 4) 00:10:22.320 11738.577 - 11791.216: 95.7849% ( 6) 00:10:22.320 11791.216 - 11843.855: 95.8067% ( 3) 00:10:22.320 11843.855 - 11896.495: 95.8140% ( 1) 00:10:22.320 12107.052 - 12159.692: 95.8212% ( 1) 00:10:22.320 12159.692 - 12212.331: 95.8358% ( 2) 00:10:22.320 12212.331 - 12264.970: 95.8503% ( 2) 00:10:22.320 12264.970 - 12317.610: 95.8576% ( 1) 00:10:22.320 12317.610 - 12370.249: 95.8721% ( 2) 00:10:22.320 12370.249 - 12422.888: 95.8794% ( 1) 00:10:22.320 12422.888 - 12475.528: 95.8939% ( 2) 00:10:22.320 12475.528 - 12528.167: 95.9375% ( 6) 00:10:22.320 12528.167 - 12580.806: 95.9666% ( 4) 00:10:22.320 12580.806 - 12633.446: 95.9956% ( 4) 00:10:22.320 12633.446 - 12686.085: 96.0320% ( 5) 00:10:22.320 12686.085 - 12738.724: 96.0610% ( 4) 00:10:22.320 12738.724 - 12791.364: 96.0901% ( 4) 00:10:22.320 12791.364 - 12844.003: 96.1265% ( 5) 00:10:22.320 12844.003 - 12896.643: 96.1628% ( 5) 00:10:22.320 12896.643 - 12949.282: 96.1846% ( 3) 00:10:22.320 12949.282 - 13001.921: 96.2137% ( 4) 00:10:22.320 13001.921 - 13054.561: 96.2500% ( 5) 00:10:22.320 13054.561 - 13107.200: 96.2718% ( 3) 00:10:22.320 13107.200 - 13159.839: 96.3009% ( 4) 00:10:22.320 13159.839 - 13212.479: 96.3299% ( 4) 00:10:22.320 13212.479 - 13265.118: 96.3590% ( 4) 00:10:22.320 13265.118 - 13317.757: 96.3953% ( 5) 00:10:22.320 13317.757 - 13370.397: 96.4244% ( 4) 00:10:22.320 13370.397 - 13423.036: 96.4535% ( 4) 00:10:22.320 13423.036 - 13475.676: 96.4826% ( 4) 00:10:22.320 13475.676 - 13580.954: 96.5334% ( 7) 00:10:22.320 13580.954 - 13686.233: 96.5916% ( 8) 00:10:22.320 13686.233 - 13791.512: 96.6424% ( 7) 00:10:22.320 13791.512 - 13896.790: 96.6642% ( 3) 00:10:22.320 13896.790 - 14002.069: 96.6860% ( 3) 00:10:22.320 14002.069 - 14107.348: 96.7224% ( 5) 00:10:22.320 14107.348 - 14212.627: 96.7660% ( 6) 00:10:22.320 14212.627 - 14317.905: 
96.8169% ( 7) 00:10:22.320 14317.905 - 14423.184: 96.8605% ( 6) 00:10:22.320 14423.184 - 14528.463: 96.9404% ( 11) 00:10:22.320 14528.463 - 14633.741: 97.0349% ( 13) 00:10:22.320 14633.741 - 14739.020: 97.1076% ( 10) 00:10:22.320 14739.020 - 14844.299: 97.1948% ( 12) 00:10:22.320 14844.299 - 14949.578: 97.2747% ( 11) 00:10:22.320 14949.578 - 15054.856: 97.3474% ( 10) 00:10:22.320 15054.856 - 15160.135: 97.4346% ( 12) 00:10:22.320 15160.135 - 15265.414: 97.5145% ( 11) 00:10:22.320 15265.414 - 15370.692: 97.5727% ( 8) 00:10:22.320 15370.692 - 15475.971: 97.5945% ( 3) 00:10:22.320 15475.971 - 15581.250: 97.6308% ( 5) 00:10:22.320 15581.250 - 15686.529: 97.6962% ( 9) 00:10:22.320 15686.529 - 15791.807: 97.7544% ( 8) 00:10:22.320 15791.807 - 15897.086: 97.7907% ( 5) 00:10:22.320 15897.086 - 16002.365: 97.8343% ( 6) 00:10:22.320 16002.365 - 16107.643: 97.8706% ( 5) 00:10:22.320 16107.643 - 16212.922: 97.9215% ( 7) 00:10:22.320 16212.922 - 16318.201: 97.9651% ( 6) 00:10:22.320 16318.201 - 16423.480: 98.0015% ( 5) 00:10:22.320 16423.480 - 16528.758: 98.0451% ( 6) 00:10:22.320 16528.758 - 16634.037: 98.0887% ( 6) 00:10:22.320 16634.037 - 16739.316: 98.1250% ( 5) 00:10:22.320 16739.316 - 16844.594: 98.1395% ( 2) 00:10:22.320 17265.709 - 17370.988: 98.1831% ( 6) 00:10:22.320 17370.988 - 17476.267: 98.2122% ( 4) 00:10:22.320 17476.267 - 17581.545: 98.2485% ( 5) 00:10:22.320 17581.545 - 17686.824: 98.2994% ( 7) 00:10:22.320 17686.824 - 17792.103: 98.3358% ( 5) 00:10:22.320 17792.103 - 17897.382: 98.3721% ( 5) 00:10:22.320 17897.382 - 18002.660: 98.4084% ( 5) 00:10:22.320 18002.660 - 18107.939: 98.4448% ( 5) 00:10:22.320 18107.939 - 18213.218: 98.4811% ( 5) 00:10:22.320 18213.218 - 18318.496: 98.5247% ( 6) 00:10:22.320 18318.496 - 18423.775: 98.5683% ( 6) 00:10:22.320 18423.775 - 18529.054: 98.6047% ( 5) 00:10:22.320 19160.726 - 19266.005: 98.6337% ( 4) 00:10:22.320 19266.005 - 19371.284: 98.6773% ( 6) 00:10:22.320 19371.284 - 19476.562: 98.7209% ( 6) 00:10:22.320 19476.562 - 19581.841: 98.7718% ( 7) 00:10:22.320 19581.841 - 19687.120: 98.8154% ( 6) 00:10:22.320 19687.120 - 19792.398: 98.8663% ( 7) 00:10:22.320 19792.398 - 19897.677: 98.9172% ( 7) 00:10:22.320 19897.677 - 20002.956: 98.9680% ( 7) 00:10:22.320 20002.956 - 20108.235: 99.0189% ( 7) 00:10:22.320 20108.235 - 20213.513: 99.0625% ( 6) 00:10:22.320 20213.513 - 20318.792: 99.0698% ( 1) 00:10:22.320 33478.631 - 33689.189: 99.0916% ( 3) 00:10:22.320 33689.189 - 33899.746: 99.1642% ( 10) 00:10:22.320 33899.746 - 34110.304: 99.2587% ( 13) 00:10:22.320 34110.304 - 34320.861: 99.3605% ( 14) 00:10:22.320 34320.861 - 34531.418: 99.4622% ( 14) 00:10:22.320 34531.418 - 34741.976: 99.5349% ( 10) 00:10:22.320 38953.124 - 39163.682: 99.5640% ( 4) 00:10:22.320 39163.682 - 39374.239: 99.6584% ( 13) 00:10:22.320 39374.239 - 39584.797: 99.7602% ( 14) 00:10:22.320 39584.797 - 39795.354: 99.8619% ( 14) 00:10:22.320 39795.354 - 40005.912: 99.9637% ( 14) 00:10:22.320 40005.912 - 40216.469: 100.0000% ( 5) 00:10:22.320 00:10:22.320 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:22.320 ============================================================================== 00:10:22.320 Range in us Cumulative IO count 00:10:22.320 4290.108 - 4316.427: 0.0072% ( 1) 00:10:22.320 4316.427 - 4342.747: 0.0362% ( 4) 00:10:22.320 4342.747 - 4369.067: 0.0506% ( 2) 00:10:22.320 4369.067 - 4395.386: 0.0579% ( 1) 00:10:22.320 4395.386 - 4421.706: 0.0723% ( 2) 00:10:22.320 4421.706 - 4448.026: 0.0940% ( 3) 00:10:22.320 4448.026 - 4474.345: 0.1085% ( 2) 00:10:22.320 
4474.345 - 4500.665: 0.1302% ( 3) 00:10:22.320 4500.665 - 4526.985: 0.1519% ( 3) 00:10:22.320 4526.985 - 4553.304: 0.1664% ( 2) 00:10:22.320 4553.304 - 4579.624: 0.1736% ( 1) 00:10:22.320 4579.624 - 4605.944: 0.1881% ( 2) 00:10:22.320 4605.944 - 4632.263: 0.2025% ( 2) 00:10:22.320 4632.263 - 4658.583: 0.2170% ( 2) 00:10:22.320 4658.583 - 4684.903: 0.2315% ( 2) 00:10:22.320 4684.903 - 4711.222: 0.2459% ( 2) 00:10:22.320 4711.222 - 4737.542: 0.2604% ( 2) 00:10:22.320 4737.542 - 4763.862: 0.2749% ( 2) 00:10:22.320 4763.862 - 4790.182: 0.2894% ( 2) 00:10:22.320 4790.182 - 4816.501: 0.3038% ( 2) 00:10:22.320 4816.501 - 4842.821: 0.3255% ( 3) 00:10:22.320 4842.821 - 4869.141: 0.3400% ( 2) 00:10:22.320 4869.141 - 4895.460: 0.3545% ( 2) 00:10:22.320 4895.460 - 4921.780: 0.3689% ( 2) 00:10:22.320 4921.780 - 4948.100: 0.3834% ( 2) 00:10:22.320 4948.100 - 4974.419: 0.4051% ( 3) 00:10:22.320 4974.419 - 5000.739: 0.4196% ( 2) 00:10:22.320 5000.739 - 5027.059: 0.4340% ( 2) 00:10:22.320 5027.059 - 5053.378: 0.4485% ( 2) 00:10:22.320 5053.378 - 5079.698: 0.4630% ( 2) 00:10:22.320 6685.198 - 6711.518: 0.4919% ( 4) 00:10:22.320 6711.518 - 6737.838: 0.4991% ( 1) 00:10:22.320 6737.838 - 6790.477: 0.5353% ( 5) 00:10:22.320 6790.477 - 6843.116: 0.5642% ( 4) 00:10:22.321 6843.116 - 6895.756: 0.6004% ( 5) 00:10:22.321 6895.756 - 6948.395: 0.6293% ( 4) 00:10:22.321 6948.395 - 7001.035: 0.6655% ( 5) 00:10:22.321 7001.035 - 7053.674: 0.6944% ( 4) 00:10:22.321 7053.674 - 7106.313: 0.7234% ( 4) 00:10:22.321 7106.313 - 7158.953: 0.7595% ( 5) 00:10:22.321 7158.953 - 7211.592: 0.7885% ( 4) 00:10:22.321 7211.592 - 7264.231: 0.8174% ( 4) 00:10:22.321 7264.231 - 7316.871: 0.8536% ( 5) 00:10:22.321 7316.871 - 7369.510: 0.8825% ( 4) 00:10:22.321 7369.510 - 7422.149: 0.9187% ( 5) 00:10:22.321 7422.149 - 7474.789: 0.9259% ( 1) 00:10:22.321 7790.625 - 7843.264: 0.9332% ( 1) 00:10:22.321 7843.264 - 7895.904: 0.9983% ( 9) 00:10:22.321 7895.904 - 7948.543: 1.1429% ( 20) 00:10:22.321 7948.543 - 8001.182: 1.5336% ( 54) 00:10:22.321 8001.182 - 8053.822: 2.2931% ( 105) 00:10:22.321 8053.822 - 8106.461: 3.5012% ( 167) 00:10:22.321 8106.461 - 8159.100: 4.9262% ( 197) 00:10:22.321 8159.100 - 8211.740: 6.7419% ( 251) 00:10:22.321 8211.740 - 8264.379: 9.0856% ( 324) 00:10:22.321 8264.379 - 8317.018: 11.8851% ( 387) 00:10:22.321 8317.018 - 8369.658: 15.1476% ( 451) 00:10:22.321 8369.658 - 8422.297: 19.0177% ( 535) 00:10:22.321 8422.297 - 8474.937: 22.5911% ( 494) 00:10:22.321 8474.937 - 8527.576: 26.7578% ( 576) 00:10:22.321 8527.576 - 8580.215: 30.8738% ( 569) 00:10:22.321 8580.215 - 8632.855: 35.3588% ( 620) 00:10:22.321 8632.855 - 8685.494: 40.2054% ( 670) 00:10:22.321 8685.494 - 8738.133: 45.0014% ( 663) 00:10:22.321 8738.133 - 8790.773: 49.7034% ( 650) 00:10:22.321 8790.773 - 8843.412: 54.3113% ( 637) 00:10:22.321 8843.412 - 8896.051: 58.5214% ( 582) 00:10:22.321 8896.051 - 8948.691: 62.4928% ( 549) 00:10:22.321 8948.691 - 9001.330: 66.3339% ( 531) 00:10:22.321 9001.330 - 9053.969: 70.0883% ( 519) 00:10:22.321 9053.969 - 9106.609: 73.3869% ( 456) 00:10:22.321 9106.609 - 9159.248: 76.2948% ( 402) 00:10:22.321 9159.248 - 9211.888: 78.5663% ( 314) 00:10:22.321 9211.888 - 9264.527: 80.6785% ( 292) 00:10:22.321 9264.527 - 9317.166: 82.5159% ( 254) 00:10:22.321 9317.166 - 9369.806: 84.2159% ( 235) 00:10:22.321 9369.806 - 9422.445: 85.8941% ( 232) 00:10:22.321 9422.445 - 9475.084: 87.3264% ( 198) 00:10:22.321 9475.084 - 9527.724: 88.4693% ( 158) 00:10:22.321 9527.724 - 9580.363: 89.3953% ( 128) 00:10:22.321 9580.363 - 9633.002: 90.1982% ( 111) 
00:10:22.321 9633.002 - 9685.642: 90.8709% ( 93) 00:10:22.321 9685.642 - 9738.281: 91.4641% ( 82) 00:10:22.321 9738.281 - 9790.920: 92.0211% ( 77) 00:10:22.321 9790.920 - 9843.560: 92.5130% ( 68) 00:10:22.321 9843.560 - 9896.199: 92.9905% ( 66) 00:10:22.321 9896.199 - 9948.839: 93.3883% ( 55) 00:10:22.321 9948.839 - 10001.478: 93.6560% ( 37) 00:10:22.321 10001.478 - 10054.117: 93.9236% ( 37) 00:10:22.321 10054.117 - 10106.757: 94.0611% ( 19) 00:10:22.321 10106.757 - 10159.396: 94.2057% ( 20) 00:10:22.321 10159.396 - 10212.035: 94.3142% ( 15) 00:10:22.321 10212.035 - 10264.675: 94.4300% ( 16) 00:10:22.321 10264.675 - 10317.314: 94.5240% ( 13) 00:10:22.321 10317.314 - 10369.953: 94.5891% ( 9) 00:10:22.321 10369.953 - 10422.593: 94.6398% ( 7) 00:10:22.321 10422.593 - 10475.232: 94.7049% ( 9) 00:10:22.321 10475.232 - 10527.871: 94.7627% ( 8) 00:10:22.321 10527.871 - 10580.511: 94.8134% ( 7) 00:10:22.321 10580.511 - 10633.150: 94.8568% ( 6) 00:10:22.321 10633.150 - 10685.790: 94.8857% ( 4) 00:10:22.321 10685.790 - 10738.429: 94.9219% ( 5) 00:10:22.321 10738.429 - 10791.068: 94.9580% ( 5) 00:10:22.321 10791.068 - 10843.708: 94.9942% ( 5) 00:10:22.321 10843.708 - 10896.347: 95.0376% ( 6) 00:10:22.321 10896.347 - 10948.986: 95.0738% ( 5) 00:10:22.321 10948.986 - 11001.626: 95.1100% ( 5) 00:10:22.321 11001.626 - 11054.265: 95.1534% ( 6) 00:10:22.321 11054.265 - 11106.904: 95.1895% ( 5) 00:10:22.321 11106.904 - 11159.544: 95.2329% ( 6) 00:10:22.321 11159.544 - 11212.183: 95.2691% ( 5) 00:10:22.321 11212.183 - 11264.822: 95.3270% ( 8) 00:10:22.321 11264.822 - 11317.462: 95.3704% ( 6) 00:10:22.321 11317.462 - 11370.101: 95.4282% ( 8) 00:10:22.321 11370.101 - 11422.741: 95.4716% ( 6) 00:10:22.321 11422.741 - 11475.380: 95.5295% ( 8) 00:10:22.321 11475.380 - 11528.019: 95.5729% ( 6) 00:10:22.321 11528.019 - 11580.659: 95.6308% ( 8) 00:10:22.321 11580.659 - 11633.298: 95.6597% ( 4) 00:10:22.321 11633.298 - 11685.937: 95.6887% ( 4) 00:10:22.321 11685.937 - 11738.577: 95.7031% ( 2) 00:10:22.321 11738.577 - 11791.216: 95.7176% ( 2) 00:10:22.321 11791.216 - 11843.855: 95.7321% ( 2) 00:10:22.321 11843.855 - 11896.495: 95.7827% ( 7) 00:10:22.321 11896.495 - 11949.134: 95.8333% ( 7) 00:10:22.321 11949.134 - 12001.773: 95.8840% ( 7) 00:10:22.321 12001.773 - 12054.413: 95.9201% ( 5) 00:10:22.321 12054.413 - 12107.052: 95.9563% ( 5) 00:10:22.321 12107.052 - 12159.692: 95.9997% ( 6) 00:10:22.321 12159.692 - 12212.331: 96.0431% ( 6) 00:10:22.321 12212.331 - 12264.970: 96.0938% ( 7) 00:10:22.321 12264.970 - 12317.610: 96.1299% ( 5) 00:10:22.321 12317.610 - 12370.249: 96.1516% ( 3) 00:10:22.321 12370.249 - 12422.888: 96.1878% ( 5) 00:10:22.321 12422.888 - 12475.528: 96.2167% ( 4) 00:10:22.321 12475.528 - 12528.167: 96.2457% ( 4) 00:10:22.321 12528.167 - 12580.806: 96.2746% ( 4) 00:10:22.321 12580.806 - 12633.446: 96.3108% ( 5) 00:10:22.321 12633.446 - 12686.085: 96.3469% ( 5) 00:10:22.321 12686.085 - 12738.724: 96.3686% ( 3) 00:10:22.321 12738.724 - 12791.364: 96.4048% ( 5) 00:10:22.321 12791.364 - 12844.003: 96.4337% ( 4) 00:10:22.321 12844.003 - 12896.643: 96.4627% ( 4) 00:10:22.321 12896.643 - 12949.282: 96.4916% ( 4) 00:10:22.321 12949.282 - 13001.921: 96.5278% ( 5) 00:10:22.321 13001.921 - 13054.561: 96.5567% ( 4) 00:10:22.321 13054.561 - 13107.200: 96.5929% ( 5) 00:10:22.321 13107.200 - 13159.839: 96.6073% ( 2) 00:10:22.321 13159.839 - 13212.479: 96.6218% ( 2) 00:10:22.321 13212.479 - 13265.118: 96.6291% ( 1) 00:10:22.321 13265.118 - 13317.757: 96.6435% ( 2) 00:10:22.321 13317.757 - 13370.397: 96.6580% ( 2) 
00:10:22.321 13370.397 - 13423.036: 96.6725% ( 2) 00:10:22.321 13423.036 - 13475.676: 96.6797% ( 1) 00:10:22.321 13475.676 - 13580.954: 96.7086% ( 4) 00:10:22.321 13580.954 - 13686.233: 96.7303% ( 3) 00:10:22.321 13686.233 - 13791.512: 96.7520% ( 3) 00:10:22.321 13791.512 - 13896.790: 96.7593% ( 1) 00:10:22.321 14212.627 - 14317.905: 96.7665% ( 1) 00:10:22.321 14317.905 - 14423.184: 96.7954% ( 4) 00:10:22.321 14423.184 - 14528.463: 96.8099% ( 2) 00:10:22.321 14528.463 - 14633.741: 96.8316% ( 3) 00:10:22.321 14633.741 - 14739.020: 96.8605% ( 4) 00:10:22.321 14739.020 - 14844.299: 96.9184% ( 8) 00:10:22.321 14844.299 - 14949.578: 96.9690% ( 7) 00:10:22.321 14949.578 - 15054.856: 97.0269% ( 8) 00:10:22.321 15054.856 - 15160.135: 97.0775% ( 7) 00:10:22.321 15160.135 - 15265.414: 97.2439% ( 23) 00:10:22.321 15265.414 - 15370.692: 97.3741% ( 18) 00:10:22.321 15370.692 - 15475.971: 97.5405% ( 23) 00:10:22.321 15475.971 - 15581.250: 97.6780% ( 19) 00:10:22.321 15581.250 - 15686.529: 97.8154% ( 19) 00:10:22.321 15686.529 - 15791.807: 97.9384% ( 17) 00:10:22.321 15791.807 - 15897.086: 98.0613% ( 17) 00:10:22.321 15897.086 - 16002.365: 98.2060% ( 20) 00:10:22.321 16002.365 - 16107.643: 98.3073% ( 14) 00:10:22.321 16107.643 - 16212.922: 98.3869% ( 11) 00:10:22.321 16212.922 - 16318.201: 98.4737% ( 12) 00:10:22.321 16318.201 - 16423.480: 98.5026% ( 4) 00:10:22.321 16423.480 - 16528.758: 98.5171% ( 2) 00:10:22.321 16528.758 - 16634.037: 98.5388% ( 3) 00:10:22.321 16634.037 - 16739.316: 98.5532% ( 2) 00:10:22.321 16739.316 - 16844.594: 98.5677% ( 2) 00:10:22.321 16844.594 - 16949.873: 98.5822% ( 2) 00:10:22.321 16949.873 - 17055.152: 98.5966% ( 2) 00:10:22.321 17055.152 - 17160.431: 98.6039% ( 1) 00:10:22.321 17160.431 - 17265.709: 98.6111% ( 1) 00:10:22.321 18950.169 - 19055.447: 98.6183% ( 1) 00:10:22.321 19055.447 - 19160.726: 98.6762% ( 8) 00:10:22.321 19160.726 - 19266.005: 98.7269% ( 7) 00:10:22.321 19266.005 - 19371.284: 98.7703% ( 6) 00:10:22.321 19371.284 - 19476.562: 98.8209% ( 7) 00:10:22.321 19476.562 - 19581.841: 98.8715% ( 7) 00:10:22.321 19581.841 - 19687.120: 98.9222% ( 7) 00:10:22.321 19687.120 - 19792.398: 98.9728% ( 7) 00:10:22.321 19792.398 - 19897.677: 99.0162% ( 6) 00:10:22.321 19897.677 - 20002.956: 99.0596% ( 6) 00:10:22.321 20002.956 - 20108.235: 99.0741% ( 2) 00:10:22.321 28004.138 - 28214.696: 99.0958% ( 3) 00:10:22.321 28214.696 - 28425.253: 99.1970% ( 14) 00:10:22.321 28425.253 - 28635.810: 99.2983% ( 14) 00:10:22.321 28635.810 - 28846.368: 99.4068% ( 15) 00:10:22.321 28846.368 - 29056.925: 99.5081% ( 14) 00:10:22.321 29056.925 - 29267.483: 99.5370% ( 4) 00:10:22.321 33689.189 - 33899.746: 99.5660% ( 4) 00:10:22.321 33899.746 - 34110.304: 99.6672% ( 14) 00:10:22.321 34110.304 - 34320.861: 99.7685% ( 14) 00:10:22.321 34320.861 - 34531.418: 99.8698% ( 14) 00:10:22.321 34531.418 - 34741.976: 99.9711% ( 14) 00:10:22.321 34741.976 - 34952.533: 100.0000% ( 4) 00:10:22.321 00:10:22.321 01:20:07 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:23.257 Initializing NVMe Controllers 00:10:23.257 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.257 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.257 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.257 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.257 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:23.257 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:23.257 
Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:23.257 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:23.257 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:23.257 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:23.257 Initialization complete. Launching workers. 00:10:23.257 ======================================================== 00:10:23.257 Latency(us) 00:10:23.257 Device Information : IOPS MiB/s Average min max 00:10:23.257 PCIE (0000:00:10.0) NSID 1 from core 0: 14648.25 171.66 8744.30 4961.31 34709.64 00:10:23.257 PCIE (0000:00:11.0) NSID 1 from core 0: 14648.25 171.66 8735.49 6047.43 34134.19 00:10:23.257 PCIE (0000:00:13.0) NSID 1 from core 0: 14648.25 171.66 8726.22 5748.81 33916.34 00:10:23.257 PCIE (0000:00:12.0) NSID 1 from core 0: 14648.25 171.66 8716.51 5310.91 33357.90 00:10:23.257 PCIE (0000:00:12.0) NSID 2 from core 0: 14648.25 171.66 8706.93 5300.61 32640.43 00:10:23.257 PCIE (0000:00:12.0) NSID 3 from core 0: 14648.25 171.66 8697.77 5153.24 31949.15 00:10:23.257 ======================================================== 00:10:23.257 Total : 87889.51 1029.96 8721.20 4961.31 34709.64 00:10:23.257 00:10:23.257 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:23.257 ================================================================================= 00:10:23.257 1.00000% : 7158.953us 00:10:23.257 10.00000% : 7685.346us 00:10:23.257 25.00000% : 8053.822us 00:10:23.257 50.00000% : 8527.576us 00:10:23.257 75.00000% : 9053.969us 00:10:23.257 90.00000% : 9633.002us 00:10:23.257 95.00000% : 9896.199us 00:10:23.257 98.00000% : 10317.314us 00:10:23.257 99.00000% : 14844.299us 00:10:23.257 99.50000% : 25477.449us 00:10:23.257 99.90000% : 34531.418us 00:10:23.257 99.99000% : 34741.976us 00:10:23.257 99.99900% : 34741.976us 00:10:23.257 99.99990% : 34741.976us 00:10:23.257 99.99999% : 34741.976us 00:10:23.257 00:10:23.257 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:23.257 ================================================================================= 00:10:23.257 1.00000% : 7264.231us 00:10:23.257 10.00000% : 7632.707us 00:10:23.257 25.00000% : 8053.822us 00:10:23.257 50.00000% : 8527.576us 00:10:23.257 75.00000% : 9001.330us 00:10:23.257 90.00000% : 9633.002us 00:10:23.257 95.00000% : 9790.920us 00:10:23.257 98.00000% : 10633.150us 00:10:23.257 99.00000% : 13107.200us 00:10:23.257 99.50000% : 26424.957us 00:10:23.257 99.90000% : 33899.746us 00:10:23.257 99.99000% : 34110.304us 00:10:23.257 99.99900% : 34320.861us 00:10:23.257 99.99990% : 34320.861us 00:10:23.257 99.99999% : 34320.861us 00:10:23.257 00:10:23.257 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:23.257 ================================================================================= 00:10:23.257 1.00000% : 7106.313us 00:10:23.257 10.00000% : 7685.346us 00:10:23.257 25.00000% : 8053.822us 00:10:23.257 50.00000% : 8474.937us 00:10:23.257 75.00000% : 9053.969us 00:10:23.257 90.00000% : 9580.363us 00:10:23.257 95.00000% : 9790.920us 00:10:23.257 98.00000% : 10212.035us 00:10:23.257 99.00000% : 12317.610us 00:10:23.257 99.50000% : 27161.908us 00:10:23.257 99.90000% : 33689.189us 00:10:23.257 99.99000% : 33899.746us 00:10:23.258 99.99900% : 34110.304us 00:10:23.258 99.99990% : 34110.304us 00:10:23.258 99.99999% : 34110.304us 00:10:23.258 00:10:23.258 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:23.258 ================================================================================= 00:10:23.258 
1.00000% : 7053.674us 00:10:23.258 10.00000% : 7685.346us 00:10:23.258 25.00000% : 8106.461us 00:10:23.258 50.00000% : 8474.937us 00:10:23.258 75.00000% : 9053.969us 00:10:23.258 90.00000% : 9580.363us 00:10:23.258 95.00000% : 9790.920us 00:10:23.258 98.00000% : 10264.675us 00:10:23.258 99.00000% : 12264.970us 00:10:23.258 99.50000% : 26740.794us 00:10:23.258 99.90000% : 33057.516us 00:10:23.258 99.99000% : 33478.631us 00:10:23.258 99.99900% : 33478.631us 00:10:23.258 99.99990% : 33478.631us 00:10:23.258 99.99999% : 33478.631us 00:10:23.258 00:10:23.258 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:23.258 ================================================================================= 00:10:23.258 1.00000% : 7106.313us 00:10:23.258 10.00000% : 7685.346us 00:10:23.258 25.00000% : 8053.822us 00:10:23.258 50.00000% : 8474.937us 00:10:23.258 75.00000% : 9001.330us 00:10:23.258 90.00000% : 9580.363us 00:10:23.258 95.00000% : 9790.920us 00:10:23.258 98.00000% : 10317.314us 00:10:23.258 99.00000% : 12054.413us 00:10:23.258 99.50000% : 26109.121us 00:10:23.258 99.90000% : 32425.844us 00:10:23.258 99.99000% : 32636.402us 00:10:23.258 99.99900% : 32846.959us 00:10:23.258 99.99990% : 32846.959us 00:10:23.258 99.99999% : 32846.959us 00:10:23.258 00:10:23.258 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:23.258 ================================================================================= 00:10:23.258 1.00000% : 7053.674us 00:10:23.258 10.00000% : 7685.346us 00:10:23.258 25.00000% : 8053.822us 00:10:23.258 50.00000% : 8527.576us 00:10:23.258 75.00000% : 9053.969us 00:10:23.258 90.00000% : 9580.363us 00:10:23.258 95.00000% : 9790.920us 00:10:23.258 98.00000% : 10264.675us 00:10:23.258 99.00000% : 11528.019us 00:10:23.258 99.50000% : 25688.006us 00:10:23.258 99.90000% : 31583.614us 00:10:23.258 99.99000% : 32004.729us 00:10:23.258 99.99900% : 32004.729us 00:10:23.258 99.99990% : 32004.729us 00:10:23.258 99.99999% : 32004.729us 00:10:23.258 00:10:23.258 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:23.258 ============================================================================== 00:10:23.258 Range in us Cumulative IO count 00:10:23.258 4948.100 - 4974.419: 0.0068% ( 1) 00:10:23.258 5921.928 - 5948.247: 0.0136% ( 1) 00:10:23.258 6000.887 - 6027.206: 0.0205% ( 1) 00:10:23.258 6079.846 - 6106.165: 0.0273% ( 1) 00:10:23.258 6106.165 - 6132.485: 0.0409% ( 2) 00:10:23.258 6132.485 - 6158.805: 0.0614% ( 3) 00:10:23.258 6158.805 - 6185.124: 0.0751% ( 2) 00:10:23.258 6185.124 - 6211.444: 0.0887% ( 2) 00:10:23.258 6211.444 - 6237.764: 0.1092% ( 3) 00:10:23.258 6237.764 - 6264.084: 0.1501% ( 6) 00:10:23.258 6264.084 - 6290.403: 0.1910% ( 6) 00:10:23.258 6290.403 - 6316.723: 0.2320% ( 6) 00:10:23.258 6316.723 - 6343.043: 0.2797% ( 7) 00:10:23.258 6343.043 - 6369.362: 0.3070% ( 4) 00:10:23.258 6369.362 - 6395.682: 0.3207% ( 2) 00:10:23.258 6395.682 - 6422.002: 0.3343% ( 2) 00:10:23.258 6422.002 - 6448.321: 0.3548% ( 3) 00:10:23.258 6448.321 - 6474.641: 0.3684% ( 2) 00:10:23.258 6474.641 - 6500.961: 0.3889% ( 3) 00:10:23.258 6500.961 - 6527.280: 0.4094% ( 3) 00:10:23.258 6527.280 - 6553.600: 0.4162% ( 1) 00:10:23.258 6553.600 - 6579.920: 0.4299% ( 2) 00:10:23.258 6737.838 - 6790.477: 0.4503% ( 3) 00:10:23.258 6790.477 - 6843.116: 0.4572% ( 1) 00:10:23.258 6843.116 - 6895.756: 0.4708% ( 2) 00:10:23.258 6895.756 - 6948.395: 0.5390% ( 10) 00:10:23.258 6948.395 - 7001.035: 0.6755% ( 20) 00:10:23.258 7001.035 - 7053.674: 0.7710% ( 14) 
00:10:23.258 7053.674 - 7106.313: 0.9757% ( 30) 00:10:23.258 7106.313 - 7158.953: 1.1941% ( 32) 00:10:23.258 7158.953 - 7211.592: 1.4943% ( 44) 00:10:23.258 7211.592 - 7264.231: 1.9651% ( 69) 00:10:23.258 7264.231 - 7316.871: 2.7497% ( 115) 00:10:23.258 7316.871 - 7369.510: 3.5549% ( 118) 00:10:23.258 7369.510 - 7422.149: 4.8308% ( 187) 00:10:23.258 7422.149 - 7474.789: 5.7383% ( 133) 00:10:23.258 7474.789 - 7527.428: 6.8027% ( 156) 00:10:23.258 7527.428 - 7580.067: 8.2151% ( 207) 00:10:23.258 7580.067 - 7632.707: 9.5865% ( 201) 00:10:23.258 7632.707 - 7685.346: 10.9102% ( 194) 00:10:23.258 7685.346 - 7737.986: 12.5273% ( 237) 00:10:23.258 7737.986 - 7790.625: 14.3013% ( 260) 00:10:23.258 7790.625 - 7843.264: 16.2323% ( 283) 00:10:23.258 7843.264 - 7895.904: 18.3065% ( 304) 00:10:23.258 7895.904 - 7948.543: 20.7697% ( 361) 00:10:23.258 7948.543 - 8001.182: 23.5808% ( 412) 00:10:23.258 8001.182 - 8053.822: 26.1872% ( 382) 00:10:23.258 8053.822 - 8106.461: 29.4350% ( 476) 00:10:23.258 8106.461 - 8159.100: 32.5055% ( 450) 00:10:23.258 8159.100 - 8211.740: 35.4189% ( 427) 00:10:23.258 8211.740 - 8264.379: 38.4416% ( 443) 00:10:23.258 8264.379 - 8317.018: 40.9798% ( 372) 00:10:23.258 8317.018 - 8369.658: 43.5590% ( 378) 00:10:23.258 8369.658 - 8422.297: 46.4793% ( 428) 00:10:23.258 8422.297 - 8474.937: 49.3450% ( 420) 00:10:23.258 8474.937 - 8527.576: 52.0128% ( 391) 00:10:23.258 8527.576 - 8580.215: 54.4965% ( 364) 00:10:23.258 8580.215 - 8632.855: 57.0961% ( 381) 00:10:23.258 8632.855 - 8685.494: 59.7776% ( 393) 00:10:23.258 8685.494 - 8738.133: 62.4591% ( 393) 00:10:23.258 8738.133 - 8790.773: 65.3316% ( 421) 00:10:23.258 8790.773 - 8843.412: 68.1427% ( 412) 00:10:23.258 8843.412 - 8896.051: 70.0396% ( 278) 00:10:23.258 8896.051 - 8948.691: 72.3049% ( 332) 00:10:23.258 8948.691 - 9001.330: 74.3313% ( 297) 00:10:23.258 9001.330 - 9053.969: 76.0371% ( 250) 00:10:23.258 9053.969 - 9106.609: 77.6133% ( 231) 00:10:23.258 9106.609 - 9159.248: 78.8619% ( 183) 00:10:23.258 9159.248 - 9211.888: 80.3016% ( 211) 00:10:23.258 9211.888 - 9264.527: 81.8573% ( 228) 00:10:23.258 9264.527 - 9317.166: 83.1127% ( 184) 00:10:23.258 9317.166 - 9369.806: 84.3545% ( 182) 00:10:23.258 9369.806 - 9422.445: 85.6305% ( 187) 00:10:23.258 9422.445 - 9475.084: 87.1520% ( 223) 00:10:23.258 9475.084 - 9527.724: 88.5917% ( 211) 00:10:23.258 9527.724 - 9580.363: 89.9154% ( 194) 00:10:23.258 9580.363 - 9633.002: 91.2186% ( 191) 00:10:23.258 9633.002 - 9685.642: 92.1124% ( 131) 00:10:23.258 9685.642 - 9738.281: 93.1359% ( 150) 00:10:23.258 9738.281 - 9790.920: 94.1594% ( 150) 00:10:23.258 9790.920 - 9843.560: 94.8349% ( 99) 00:10:23.258 9843.560 - 9896.199: 95.3534% ( 76) 00:10:23.258 9896.199 - 9948.839: 95.7697% ( 61) 00:10:23.258 9948.839 - 10001.478: 96.2336% ( 68) 00:10:23.258 10001.478 - 10054.117: 96.6498% ( 61) 00:10:23.258 10054.117 - 10106.757: 96.9773% ( 48) 00:10:23.258 10106.757 - 10159.396: 97.3799% ( 59) 00:10:23.258 10159.396 - 10212.035: 97.6597% ( 41) 00:10:23.258 10212.035 - 10264.675: 97.8916% ( 34) 00:10:23.258 10264.675 - 10317.314: 98.0145% ( 18) 00:10:23.258 10317.314 - 10369.953: 98.0486% ( 5) 00:10:23.258 10369.953 - 10422.593: 98.0963% ( 7) 00:10:23.258 10422.593 - 10475.232: 98.1305% ( 5) 00:10:23.258 10475.232 - 10527.871: 98.1509% ( 3) 00:10:23.258 10527.871 - 10580.511: 98.1850% ( 5) 00:10:23.258 10580.511 - 10633.150: 98.2192% ( 5) 00:10:23.258 10633.150 - 10685.790: 98.2328% ( 2) 00:10:23.258 10685.790 - 10738.429: 98.2465% ( 2) 00:10:23.258 10791.068 - 10843.708: 98.2669% ( 3) 00:10:23.258 
10843.708 - 10896.347: 98.2737% ( 1) 00:10:23.258 10896.347 - 10948.986: 98.2874% ( 2) 00:10:23.258 10948.986 - 11001.626: 98.3010% ( 2) 00:10:23.258 11001.626 - 11054.265: 98.3147% ( 2) 00:10:23.258 11054.265 - 11106.904: 98.3283% ( 2) 00:10:23.258 11106.904 - 11159.544: 98.3420% ( 2) 00:10:23.258 11159.544 - 11212.183: 98.3556% ( 2) 00:10:23.258 11212.183 - 11264.822: 98.3761% ( 3) 00:10:23.258 11264.822 - 11317.462: 98.3897% ( 2) 00:10:23.258 11317.462 - 11370.101: 98.3966% ( 1) 00:10:23.258 11370.101 - 11422.741: 98.4239% ( 4) 00:10:23.258 11422.741 - 11475.380: 98.4443% ( 3) 00:10:23.258 11475.380 - 11528.019: 98.4580% ( 2) 00:10:23.258 11528.019 - 11580.659: 98.4716% ( 2) 00:10:23.258 11580.659 - 11633.298: 98.4853% ( 2) 00:10:23.258 11633.298 - 11685.937: 98.4921% ( 1) 00:10:23.259 11685.937 - 11738.577: 98.5057% ( 2) 00:10:23.259 11738.577 - 11791.216: 98.5262% ( 3) 00:10:23.259 11791.216 - 11843.855: 98.5603% ( 5) 00:10:23.259 11843.855 - 11896.495: 98.6081% ( 7) 00:10:23.259 11896.495 - 11949.134: 98.6831% ( 11) 00:10:23.259 13475.676 - 13580.954: 98.6900% ( 1) 00:10:23.259 14107.348 - 14212.627: 98.6968% ( 1) 00:10:23.259 14212.627 - 14317.905: 98.7650% ( 10) 00:10:23.259 14317.905 - 14423.184: 98.8332% ( 10) 00:10:23.259 14423.184 - 14528.463: 98.9083% ( 11) 00:10:23.259 14528.463 - 14633.741: 98.9629% ( 8) 00:10:23.259 14633.741 - 14739.020: 98.9970% ( 5) 00:10:23.259 14739.020 - 14844.299: 99.0175% ( 3) 00:10:23.259 14844.299 - 14949.578: 99.0516% ( 5) 00:10:23.259 14949.578 - 15054.856: 99.0721% ( 3) 00:10:23.259 15054.856 - 15160.135: 99.0993% ( 4) 00:10:23.259 15160.135 - 15265.414: 99.1130% ( 2) 00:10:23.259 15265.414 - 15370.692: 99.1266% ( 2) 00:10:23.259 24108.826 - 24214.104: 99.1335% ( 1) 00:10:23.259 24214.104 - 24319.383: 99.1676% ( 5) 00:10:23.259 24319.383 - 24424.662: 99.1949% ( 4) 00:10:23.259 24424.662 - 24529.941: 99.2358% ( 6) 00:10:23.259 24529.941 - 24635.219: 99.2699% ( 5) 00:10:23.259 24635.219 - 24740.498: 99.2972% ( 4) 00:10:23.259 24740.498 - 24845.777: 99.3382% ( 6) 00:10:23.259 24845.777 - 24951.055: 99.3586% ( 3) 00:10:23.259 24951.055 - 25056.334: 99.3859% ( 4) 00:10:23.259 25056.334 - 25161.613: 99.4132% ( 4) 00:10:23.259 25161.613 - 25266.892: 99.4541% ( 6) 00:10:23.259 25266.892 - 25372.170: 99.4814% ( 4) 00:10:23.259 25372.170 - 25477.449: 99.5224% ( 6) 00:10:23.259 25477.449 - 25582.728: 99.5497% ( 4) 00:10:23.259 25582.728 - 25688.006: 99.5633% ( 2) 00:10:23.259 33057.516 - 33268.074: 99.6179% ( 8) 00:10:23.259 33268.074 - 33478.631: 99.6793% ( 9) 00:10:23.259 33478.631 - 33689.189: 99.7339% ( 8) 00:10:23.259 33689.189 - 33899.746: 99.7885% ( 8) 00:10:23.259 33899.746 - 34110.304: 99.8499% ( 9) 00:10:23.259 34110.304 - 34320.861: 99.8977% ( 7) 00:10:23.519 34320.861 - 34531.418: 99.9591% ( 9) 00:10:23.519 34531.418 - 34741.976: 100.0000% ( 6) 00:10:23.519 00:10:23.519 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:23.519 ============================================================================== 00:10:23.519 Range in us Cumulative IO count 00:10:23.519 6027.206 - 6053.526: 0.0068% ( 1) 00:10:23.519 6079.846 - 6106.165: 0.0136% ( 1) 00:10:23.519 6106.165 - 6132.485: 0.0273% ( 2) 00:10:23.519 6132.485 - 6158.805: 0.0409% ( 2) 00:10:23.519 6158.805 - 6185.124: 0.0614% ( 3) 00:10:23.519 6185.124 - 6211.444: 0.0887% ( 4) 00:10:23.519 6211.444 - 6237.764: 0.3207% ( 34) 00:10:23.519 6237.764 - 6264.084: 0.3821% ( 9) 00:10:23.519 6264.084 - 6290.403: 0.4094% ( 4) 00:10:23.519 6290.403 - 6316.723: 0.4230% ( 2) 00:10:23.519 
6316.723 - 6343.043: 0.4367% ( 2) 00:10:23.519 7001.035 - 7053.674: 0.4435% ( 1) 00:10:23.519 7053.674 - 7106.313: 0.5049% ( 9) 00:10:23.519 7106.313 - 7158.953: 0.6346% ( 19) 00:10:23.519 7158.953 - 7211.592: 0.8597% ( 33) 00:10:23.519 7211.592 - 7264.231: 1.4602% ( 88) 00:10:23.519 7264.231 - 7316.871: 2.1288% ( 98) 00:10:23.519 7316.871 - 7369.510: 2.6747% ( 80) 00:10:23.519 7369.510 - 7422.149: 3.4252% ( 110) 00:10:23.519 7422.149 - 7474.789: 4.5920% ( 171) 00:10:23.519 7474.789 - 7527.428: 6.4069% ( 266) 00:10:23.519 7527.428 - 7580.067: 8.3106% ( 279) 00:10:23.519 7580.067 - 7632.707: 10.2279% ( 281) 00:10:23.519 7632.707 - 7685.346: 11.1558% ( 136) 00:10:23.519 7685.346 - 7737.986: 12.6092% ( 213) 00:10:23.519 7737.986 - 7790.625: 13.7555% ( 168) 00:10:23.519 7790.625 - 7843.264: 15.6318% ( 275) 00:10:23.519 7843.264 - 7895.904: 17.9380% ( 338) 00:10:23.519 7895.904 - 7948.543: 20.4285% ( 365) 00:10:23.519 7948.543 - 8001.182: 23.1782% ( 403) 00:10:23.519 8001.182 - 8053.822: 26.2350% ( 448) 00:10:23.519 8053.822 - 8106.461: 29.1416% ( 426) 00:10:23.519 8106.461 - 8159.100: 32.3485% ( 470) 00:10:23.519 8159.100 - 8211.740: 35.4189% ( 450) 00:10:23.519 8211.740 - 8264.379: 37.9299% ( 368) 00:10:23.519 8264.379 - 8317.018: 40.2907% ( 346) 00:10:23.519 8317.018 - 8369.658: 42.9449% ( 389) 00:10:23.519 8369.658 - 8422.297: 46.0289% ( 452) 00:10:23.519 8422.297 - 8474.937: 49.1198% ( 453) 00:10:23.519 8474.937 - 8527.576: 52.0742% ( 433) 00:10:23.519 8527.576 - 8580.215: 55.5336% ( 507) 00:10:23.519 8580.215 - 8632.855: 58.8223% ( 482) 00:10:23.519 8632.855 - 8685.494: 61.9337% ( 456) 00:10:23.519 8685.494 - 8738.133: 64.4855% ( 374) 00:10:23.519 8738.133 - 8790.773: 66.7576% ( 333) 00:10:23.519 8790.773 - 8843.412: 68.5862% ( 268) 00:10:23.519 8843.412 - 8896.051: 70.6605% ( 304) 00:10:23.519 8896.051 - 8948.691: 73.5671% ( 426) 00:10:23.519 8948.691 - 9001.330: 75.2729% ( 250) 00:10:23.519 9001.330 - 9053.969: 76.7808% ( 221) 00:10:23.519 9053.969 - 9106.609: 78.0704% ( 189) 00:10:23.519 9106.609 - 9159.248: 79.1144% ( 153) 00:10:23.519 9159.248 - 9211.888: 80.3630% ( 183) 00:10:23.519 9211.888 - 9264.527: 81.4888% ( 165) 00:10:23.519 9264.527 - 9317.166: 82.4713% ( 144) 00:10:23.519 9317.166 - 9369.806: 83.7541% ( 188) 00:10:23.519 9369.806 - 9422.445: 85.0096% ( 184) 00:10:23.519 9422.445 - 9475.084: 86.2582% ( 183) 00:10:23.519 9475.084 - 9527.724: 87.7320% ( 216) 00:10:23.519 9527.724 - 9580.363: 89.2672% ( 225) 00:10:23.519 9580.363 - 9633.002: 91.0822% ( 266) 00:10:23.519 9633.002 - 9685.642: 92.9380% ( 272) 00:10:23.519 9685.642 - 9738.281: 94.1184% ( 173) 00:10:23.519 9738.281 - 9790.920: 95.1487% ( 151) 00:10:23.519 9790.920 - 9843.560: 95.7833% ( 93) 00:10:23.519 9843.560 - 9896.199: 96.3087% ( 77) 00:10:23.519 9896.199 - 9948.839: 96.8068% ( 73) 00:10:23.519 9948.839 - 10001.478: 97.3185% ( 75) 00:10:23.519 10001.478 - 10054.117: 97.5641% ( 36) 00:10:23.519 10054.117 - 10106.757: 97.6801% ( 17) 00:10:23.519 10106.757 - 10159.396: 97.7415% ( 9) 00:10:23.519 10159.396 - 10212.035: 97.7961% ( 8) 00:10:23.519 10212.035 - 10264.675: 97.8234% ( 4) 00:10:23.519 10369.953 - 10422.593: 97.8439% ( 3) 00:10:23.519 10422.593 - 10475.232: 97.8848% ( 6) 00:10:23.519 10475.232 - 10527.871: 97.9258% ( 6) 00:10:23.519 10527.871 - 10580.511: 97.9872% ( 9) 00:10:23.519 10580.511 - 10633.150: 98.1646% ( 26) 00:10:23.519 10633.150 - 10685.790: 98.1850% ( 3) 00:10:23.519 10685.790 - 10738.429: 98.2123% ( 4) 00:10:23.519 10738.429 - 10791.068: 98.2328% ( 3) 00:10:23.519 10791.068 - 10843.708: 
98.2533% ( 3) 00:10:23.519 10843.708 - 10896.347: 98.2806% ( 4) 00:10:23.519 10896.347 - 10948.986: 98.2874% ( 1) 00:10:23.519 10948.986 - 11001.626: 98.3147% ( 4) 00:10:23.519 11001.626 - 11054.265: 98.3352% ( 3) 00:10:23.519 11054.265 - 11106.904: 98.3624% ( 4) 00:10:23.519 11106.904 - 11159.544: 98.5671% ( 30) 00:10:23.519 11159.544 - 11212.183: 98.6013% ( 5) 00:10:23.519 11212.183 - 11264.822: 98.6149% ( 2) 00:10:23.519 11264.822 - 11317.462: 98.6354% ( 3) 00:10:23.519 11317.462 - 11370.101: 98.6490% ( 2) 00:10:23.519 11370.101 - 11422.741: 98.6627% ( 2) 00:10:23.519 11422.741 - 11475.380: 98.6831% ( 3) 00:10:23.519 11475.380 - 11528.019: 98.6900% ( 1) 00:10:23.519 12738.724 - 12791.364: 98.7172% ( 4) 00:10:23.519 12791.364 - 12844.003: 98.7650% ( 7) 00:10:23.519 12844.003 - 12896.643: 98.8128% ( 7) 00:10:23.519 12896.643 - 12949.282: 98.8537% ( 6) 00:10:23.519 12949.282 - 13001.921: 98.8947% ( 6) 00:10:23.519 13001.921 - 13054.561: 98.9629% ( 10) 00:10:23.519 13054.561 - 13107.200: 99.0038% ( 6) 00:10:23.519 13107.200 - 13159.839: 99.0379% ( 5) 00:10:23.519 13159.839 - 13212.479: 99.0721% ( 5) 00:10:23.519 13212.479 - 13265.118: 99.1062% ( 5) 00:10:23.519 13265.118 - 13317.757: 99.1198% ( 2) 00:10:23.519 13317.757 - 13370.397: 99.1266% ( 1) 00:10:23.519 25161.613 - 25266.892: 99.1335% ( 1) 00:10:23.519 25266.892 - 25372.170: 99.1744% ( 6) 00:10:23.519 25372.170 - 25477.449: 99.2085% ( 5) 00:10:23.519 25477.449 - 25582.728: 99.2495% ( 6) 00:10:23.519 25582.728 - 25688.006: 99.2836% ( 5) 00:10:23.519 25688.006 - 25793.285: 99.3245% ( 6) 00:10:23.519 25793.285 - 25898.564: 99.3586% ( 5) 00:10:23.519 25898.564 - 26003.843: 99.3859% ( 4) 00:10:23.519 26003.843 - 26109.121: 99.4269% ( 6) 00:10:23.519 26109.121 - 26214.400: 99.4610% ( 5) 00:10:23.519 26214.400 - 26319.679: 99.4951% ( 5) 00:10:23.519 26319.679 - 26424.957: 99.5360% ( 6) 00:10:23.519 26424.957 - 26530.236: 99.5633% ( 4) 00:10:23.519 32636.402 - 32846.959: 99.5974% ( 5) 00:10:23.519 32846.959 - 33057.516: 99.6520% ( 8) 00:10:23.519 33057.516 - 33268.074: 99.7203% ( 10) 00:10:23.519 33268.074 - 33478.631: 99.7885% ( 10) 00:10:23.519 33478.631 - 33689.189: 99.8567% ( 10) 00:10:23.519 33689.189 - 33899.746: 99.9249% ( 10) 00:10:23.519 33899.746 - 34110.304: 99.9932% ( 10) 00:10:23.519 34110.304 - 34320.861: 100.0000% ( 1) 00:10:23.519 00:10:23.519 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:23.519 ============================================================================== 00:10:23.519 Range in us Cumulative IO count 00:10:23.519 5737.690 - 5764.010: 0.0136% ( 2) 00:10:23.519 5764.010 - 5790.329: 0.0409% ( 4) 00:10:23.519 5790.329 - 5816.649: 0.0614% ( 3) 00:10:23.519 5816.649 - 5842.969: 0.1023% ( 6) 00:10:23.519 5842.969 - 5869.288: 0.1433% ( 6) 00:10:23.519 5869.288 - 5895.608: 0.1774% ( 5) 00:10:23.519 5895.608 - 5921.928: 0.2525% ( 11) 00:10:23.519 5921.928 - 5948.247: 0.2934% ( 6) 00:10:23.519 5948.247 - 5974.567: 0.3139% ( 3) 00:10:23.519 5974.567 - 6000.887: 0.3412% ( 4) 00:10:23.519 6000.887 - 6027.206: 0.3684% ( 4) 00:10:23.519 6027.206 - 6053.526: 0.3957% ( 4) 00:10:23.519 6053.526 - 6079.846: 0.4230% ( 4) 00:10:23.519 6079.846 - 6106.165: 0.4367% ( 2) 00:10:23.519 6711.518 - 6737.838: 0.4435% ( 1) 00:10:23.519 6843.116 - 6895.756: 0.4844% ( 6) 00:10:23.519 6895.756 - 6948.395: 0.5595% ( 11) 00:10:23.519 6948.395 - 7001.035: 0.6755% ( 17) 00:10:23.519 7001.035 - 7053.674: 0.8802% ( 30) 00:10:23.519 7053.674 - 7106.313: 1.0985% ( 32) 00:10:23.519 7106.313 - 7158.953: 1.3919% ( 43) 00:10:23.520 
7158.953 - 7211.592: 1.7126% ( 47) 00:10:23.520 7211.592 - 7264.231: 2.1220% ( 60) 00:10:23.520 7264.231 - 7316.871: 2.4973% ( 55) 00:10:23.520 7316.871 - 7369.510: 2.9408% ( 65) 00:10:23.520 7369.510 - 7422.149: 3.8551% ( 134) 00:10:23.520 7422.149 - 7474.789: 4.6397% ( 115) 00:10:23.520 7474.789 - 7527.428: 5.8952% ( 184) 00:10:23.520 7527.428 - 7580.067: 7.6487% ( 257) 00:10:23.520 7580.067 - 7632.707: 9.1294% ( 217) 00:10:23.520 7632.707 - 7685.346: 10.4053% ( 187) 00:10:23.520 7685.346 - 7737.986: 11.9337% ( 224) 00:10:23.520 7737.986 - 7790.625: 13.5098% ( 231) 00:10:23.520 7790.625 - 7843.264: 15.5704% ( 302) 00:10:23.520 7843.264 - 7895.904: 17.6037% ( 298) 00:10:23.520 7895.904 - 7948.543: 20.2033% ( 381) 00:10:23.520 7948.543 - 8001.182: 22.6528% ( 359) 00:10:23.520 8001.182 - 8053.822: 26.3510% ( 542) 00:10:23.520 8053.822 - 8106.461: 29.3941% ( 446) 00:10:23.520 8106.461 - 8159.100: 32.7716% ( 495) 00:10:23.520 8159.100 - 8211.740: 36.0808% ( 485) 00:10:23.520 8211.740 - 8264.379: 38.8100% ( 400) 00:10:23.520 8264.379 - 8317.018: 42.0306% ( 472) 00:10:23.520 8317.018 - 8369.658: 45.1829% ( 462) 00:10:23.520 8369.658 - 8422.297: 48.1305% ( 432) 00:10:23.520 8422.297 - 8474.937: 50.8529% ( 399) 00:10:23.520 8474.937 - 8527.576: 53.5480% ( 395) 00:10:23.520 8527.576 - 8580.215: 55.9907% ( 358) 00:10:23.520 8580.215 - 8632.855: 58.0650% ( 304) 00:10:23.520 8632.855 - 8685.494: 60.1324% ( 303) 00:10:23.520 8685.494 - 8738.133: 61.9405% ( 265) 00:10:23.520 8738.133 - 8790.773: 63.7828% ( 270) 00:10:23.520 8790.773 - 8843.412: 65.8433% ( 302) 00:10:23.520 8843.412 - 8896.051: 68.3884% ( 373) 00:10:23.520 8896.051 - 8948.691: 70.9675% ( 378) 00:10:23.520 8948.691 - 9001.330: 72.9258% ( 287) 00:10:23.520 9001.330 - 9053.969: 75.5731% ( 388) 00:10:23.520 9053.969 - 9106.609: 77.8180% ( 329) 00:10:23.520 9106.609 - 9159.248: 80.0559% ( 328) 00:10:23.520 9159.248 - 9211.888: 81.3251% ( 186) 00:10:23.520 9211.888 - 9264.527: 82.6897% ( 200) 00:10:23.520 9264.527 - 9317.166: 83.7473% ( 155) 00:10:23.520 9317.166 - 9369.806: 84.9754% ( 180) 00:10:23.520 9369.806 - 9422.445: 86.2309% ( 184) 00:10:23.520 9422.445 - 9475.084: 87.6296% ( 205) 00:10:23.520 9475.084 - 9527.724: 89.1990% ( 230) 00:10:23.520 9527.724 - 9580.363: 90.6045% ( 206) 00:10:23.520 9580.363 - 9633.002: 91.9623% ( 199) 00:10:23.520 9633.002 - 9685.642: 93.3747% ( 207) 00:10:23.520 9685.642 - 9738.281: 94.1935% ( 120) 00:10:23.520 9738.281 - 9790.920: 95.1965% ( 147) 00:10:23.520 9790.920 - 9843.560: 95.9198% ( 106) 00:10:23.520 9843.560 - 9896.199: 96.5407% ( 91) 00:10:23.520 9896.199 - 9948.839: 96.9705% ( 63) 00:10:23.520 9948.839 - 10001.478: 97.2230% ( 37) 00:10:23.520 10001.478 - 10054.117: 97.5164% ( 43) 00:10:23.520 10054.117 - 10106.757: 97.8371% ( 47) 00:10:23.520 10106.757 - 10159.396: 97.9599% ( 18) 00:10:23.520 10159.396 - 10212.035: 98.1032% ( 21) 00:10:23.520 10212.035 - 10264.675: 98.1714% ( 10) 00:10:23.520 10264.675 - 10317.314: 98.2123% ( 6) 00:10:23.520 10317.314 - 10369.953: 98.2396% ( 4) 00:10:23.520 10369.953 - 10422.593: 98.2533% ( 2) 00:10:23.520 10527.871 - 10580.511: 98.2806% ( 4) 00:10:23.520 10580.511 - 10633.150: 98.3010% ( 3) 00:10:23.520 10633.150 - 10685.790: 98.3352% ( 5) 00:10:23.520 10685.790 - 10738.429: 98.3761% ( 6) 00:10:23.520 10738.429 - 10791.068: 98.4102% ( 5) 00:10:23.520 10791.068 - 10843.708: 98.4648% ( 8) 00:10:23.520 10843.708 - 10896.347: 98.5330% ( 10) 00:10:23.520 10896.347 - 10948.986: 98.5603% ( 4) 00:10:23.520 10948.986 - 11001.626: 98.5944% ( 5) 00:10:23.520 11001.626 - 
11054.265: 98.6354% ( 6) 00:10:23.520 11054.265 - 11106.904: 98.6627% ( 4) 00:10:23.520 11106.904 - 11159.544: 98.6900% ( 4) 00:10:23.520 11685.937 - 11738.577: 98.6968% ( 1) 00:10:23.520 11843.855 - 11896.495: 98.7036% ( 1) 00:10:23.520 11896.495 - 11949.134: 98.7104% ( 1) 00:10:23.520 12001.773 - 12054.413: 98.7172% ( 1) 00:10:23.520 12054.413 - 12107.052: 98.7582% ( 6) 00:10:23.520 12107.052 - 12159.692: 98.8128% ( 8) 00:10:23.520 12159.692 - 12212.331: 98.8537% ( 6) 00:10:23.520 12212.331 - 12264.970: 98.9970% ( 21) 00:10:23.520 12264.970 - 12317.610: 99.0448% ( 7) 00:10:23.520 12317.610 - 12370.249: 99.0789% ( 5) 00:10:23.520 12370.249 - 12422.888: 99.1130% ( 5) 00:10:23.520 12422.888 - 12475.528: 99.1266% ( 2) 00:10:23.520 26319.679 - 26424.957: 99.2017% ( 11) 00:10:23.520 26424.957 - 26530.236: 99.3040% ( 15) 00:10:23.520 26530.236 - 26635.515: 99.3859% ( 12) 00:10:23.520 26635.515 - 26740.794: 99.4132% ( 4) 00:10:23.520 26740.794 - 26846.072: 99.4405% ( 4) 00:10:23.520 26846.072 - 26951.351: 99.4678% ( 4) 00:10:23.520 26951.351 - 27161.908: 99.5156% ( 7) 00:10:23.520 27161.908 - 27372.466: 99.5633% ( 7) 00:10:23.520 32004.729 - 32215.287: 99.6452% ( 12) 00:10:23.520 32215.287 - 32425.844: 99.7203% ( 11) 00:10:23.520 32636.402 - 32846.959: 99.7407% ( 3) 00:10:23.520 32846.959 - 33057.516: 99.7885% ( 7) 00:10:23.520 33057.516 - 33268.074: 99.8294% ( 6) 00:10:23.520 33268.074 - 33478.631: 99.8772% ( 7) 00:10:23.520 33478.631 - 33689.189: 99.9249% ( 7) 00:10:23.520 33689.189 - 33899.746: 99.9932% ( 10) 00:10:23.520 33899.746 - 34110.304: 100.0000% ( 1) 00:10:23.520 00:10:23.520 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:23.520 ============================================================================== 00:10:23.520 Range in us Cumulative IO count 00:10:23.520 5290.255 - 5316.575: 0.0068% ( 1) 00:10:23.520 5395.534 - 5421.854: 0.0136% ( 1) 00:10:23.520 5448.173 - 5474.493: 0.0273% ( 2) 00:10:23.520 5474.493 - 5500.813: 0.0341% ( 1) 00:10:23.520 5500.813 - 5527.133: 0.0478% ( 2) 00:10:23.520 5527.133 - 5553.452: 0.0614% ( 2) 00:10:23.520 5553.452 - 5579.772: 0.0751% ( 2) 00:10:23.520 5579.772 - 5606.092: 0.1023% ( 4) 00:10:23.520 5606.092 - 5632.411: 0.1296% ( 4) 00:10:23.520 5632.411 - 5658.731: 0.1569% ( 4) 00:10:23.520 5658.731 - 5685.051: 0.2593% ( 15) 00:10:23.520 5685.051 - 5711.370: 0.3139% ( 8) 00:10:23.520 5711.370 - 5737.690: 0.3412% ( 4) 00:10:23.520 5737.690 - 5764.010: 0.3684% ( 4) 00:10:23.520 5764.010 - 5790.329: 0.3957% ( 4) 00:10:23.520 5790.329 - 5816.649: 0.4162% ( 3) 00:10:23.520 5816.649 - 5842.969: 0.4299% ( 2) 00:10:23.520 5842.969 - 5869.288: 0.4367% ( 1) 00:10:23.520 6606.239 - 6632.559: 0.4435% ( 1) 00:10:23.520 6632.559 - 6658.879: 0.4503% ( 1) 00:10:23.520 6658.879 - 6685.198: 0.4640% ( 2) 00:10:23.520 6685.198 - 6711.518: 0.4913% ( 4) 00:10:23.520 6711.518 - 6737.838: 0.5254% ( 5) 00:10:23.520 6737.838 - 6790.477: 0.5800% ( 8) 00:10:23.520 6790.477 - 6843.116: 0.7028% ( 18) 00:10:23.520 6843.116 - 6895.756: 0.7574% ( 8) 00:10:23.520 6895.756 - 6948.395: 0.8461% ( 13) 00:10:23.520 6948.395 - 7001.035: 0.9621% ( 17) 00:10:23.520 7001.035 - 7053.674: 1.0644% ( 15) 00:10:23.520 7053.674 - 7106.313: 1.2282% ( 24) 00:10:23.520 7106.313 - 7158.953: 1.3987% ( 25) 00:10:23.520 7158.953 - 7211.592: 1.7194% ( 47) 00:10:23.520 7211.592 - 7264.231: 2.2516% ( 78) 00:10:23.520 7264.231 - 7316.871: 2.7293% ( 70) 00:10:23.520 7316.871 - 7369.510: 3.5139% ( 115) 00:10:23.520 7369.510 - 7422.149: 4.6124% ( 161) 00:10:23.520 7422.149 - 7474.789: 
5.6291% ( 149) 00:10:23.520 7474.789 - 7527.428: 6.7890% ( 170) 00:10:23.520 7527.428 - 7580.067: 8.0786% ( 189) 00:10:23.520 7580.067 - 7632.707: 9.3750% ( 190) 00:10:23.520 7632.707 - 7685.346: 10.6646% ( 189) 00:10:23.520 7685.346 - 7737.986: 12.1861% ( 223) 00:10:23.520 7737.986 - 7790.625: 14.4446% ( 331) 00:10:23.520 7790.625 - 7843.264: 15.9047% ( 214) 00:10:23.520 7843.264 - 7895.904: 17.6856% ( 261) 00:10:23.520 7895.904 - 7948.543: 19.8758% ( 321) 00:10:23.520 7948.543 - 8001.182: 22.1752% ( 337) 00:10:23.520 8001.182 - 8053.822: 24.9795% ( 411) 00:10:23.520 8053.822 - 8106.461: 27.9817% ( 440) 00:10:23.520 8106.461 - 8159.100: 30.9498% ( 435) 00:10:23.520 8159.100 - 8211.740: 34.1908% ( 475) 00:10:23.520 8211.740 - 8264.379: 37.3635% ( 465) 00:10:23.520 8264.379 - 8317.018: 40.4203% ( 448) 00:10:23.520 8317.018 - 8369.658: 43.8455% ( 502) 00:10:23.520 8369.658 - 8422.297: 47.2230% ( 495) 00:10:23.520 8422.297 - 8474.937: 50.1433% ( 428) 00:10:23.520 8474.937 - 8527.576: 53.1523% ( 441) 00:10:23.520 8527.576 - 8580.215: 55.6018% ( 359) 00:10:23.520 8580.215 - 8632.855: 57.6078% ( 294) 00:10:23.520 8632.855 - 8685.494: 60.0983% ( 365) 00:10:23.520 8685.494 - 8738.133: 62.3158% ( 325) 00:10:23.520 8738.133 - 8790.773: 64.6015% ( 335) 00:10:23.520 8790.773 - 8843.412: 66.9419% ( 343) 00:10:23.520 8843.412 - 8896.051: 69.3368% ( 351) 00:10:23.520 8896.051 - 8948.691: 71.6021% ( 332) 00:10:23.520 8948.691 - 9001.330: 74.0311% ( 356) 00:10:23.520 9001.330 - 9053.969: 76.0303% ( 293) 00:10:23.520 9053.969 - 9106.609: 78.0090% ( 290) 00:10:23.520 9106.609 - 9159.248: 79.7421% ( 254) 00:10:23.520 9159.248 - 9211.888: 81.4274% ( 247) 00:10:23.520 9211.888 - 9264.527: 83.2697% ( 270) 00:10:23.520 9264.527 - 9317.166: 84.7298% ( 214) 00:10:23.520 9317.166 - 9369.806: 86.2172% ( 218) 00:10:23.520 9369.806 - 9422.445: 87.4113% ( 175) 00:10:23.520 9422.445 - 9475.084: 88.5576% ( 168) 00:10:23.520 9475.084 - 9527.724: 89.6152% ( 155) 00:10:23.520 9527.724 - 9580.363: 91.0003% ( 203) 00:10:23.520 9580.363 - 9633.002: 92.2898% ( 189) 00:10:23.520 9633.002 - 9685.642: 93.2656% ( 143) 00:10:23.520 9685.642 - 9738.281: 94.2549% ( 145) 00:10:23.520 9738.281 - 9790.920: 95.4694% ( 178) 00:10:23.520 9790.920 - 9843.560: 96.0494% ( 85) 00:10:23.520 9843.560 - 9896.199: 96.8204% ( 113) 00:10:23.520 9896.199 - 9948.839: 97.0524% ( 34) 00:10:23.520 9948.839 - 10001.478: 97.2776% ( 33) 00:10:23.520 10001.478 - 10054.117: 97.4209% ( 21) 00:10:23.520 10054.117 - 10106.757: 97.5096% ( 13) 00:10:23.520 10106.757 - 10159.396: 97.8029% ( 43) 00:10:23.520 10159.396 - 10212.035: 97.8916% ( 13) 00:10:23.520 10212.035 - 10264.675: 98.1509% ( 38) 00:10:23.520 10264.675 - 10317.314: 98.2260% ( 11) 00:10:23.520 10317.314 - 10369.953: 98.3010% ( 11) 00:10:23.520 10369.953 - 10422.593: 98.3624% ( 9) 00:10:23.520 10422.593 - 10475.232: 98.4239% ( 9) 00:10:23.520 10475.232 - 10527.871: 98.5262% ( 15) 00:10:23.520 10527.871 - 10580.511: 98.5671% ( 6) 00:10:23.520 10580.511 - 10633.150: 98.6081% ( 6) 00:10:23.520 10633.150 - 10685.790: 98.6354% ( 4) 00:10:23.520 10685.790 - 10738.429: 98.6695% ( 5) 00:10:23.520 10738.429 - 10791.068: 98.6900% ( 3) 00:10:23.520 11685.937 - 11738.577: 98.6968% ( 1) 00:10:23.520 12054.413 - 12107.052: 98.7309% ( 5) 00:10:23.520 12107.052 - 12159.692: 98.7787% ( 7) 00:10:23.520 12159.692 - 12212.331: 98.8742% ( 14) 00:10:23.520 12212.331 - 12264.970: 99.0448% ( 25) 00:10:23.520 12264.970 - 12317.610: 99.0652% ( 3) 00:10:23.520 12317.610 - 12370.249: 99.0789% ( 2) 00:10:23.520 12370.249 - 
12422.888: 99.0993% ( 3) 00:10:23.520 12422.888 - 12475.528: 99.1130% ( 2) 00:10:23.520 12475.528 - 12528.167: 99.1266% ( 2) 00:10:23.520 25688.006 - 25793.285: 99.1335% ( 1) 00:10:23.520 25793.285 - 25898.564: 99.2222% ( 13) 00:10:23.520 25898.564 - 26003.843: 99.2904% ( 10) 00:10:23.520 26003.843 - 26109.121: 99.3450% ( 8) 00:10:23.520 26109.121 - 26214.400: 99.3927% ( 7) 00:10:23.520 26214.400 - 26319.679: 99.4200% ( 4) 00:10:23.520 26319.679 - 26424.957: 99.4405% ( 3) 00:10:23.520 26424.957 - 26530.236: 99.4610% ( 3) 00:10:23.520 26530.236 - 26635.515: 99.4883% ( 4) 00:10:23.520 26635.515 - 26740.794: 99.5087% ( 3) 00:10:23.520 26740.794 - 26846.072: 99.5360% ( 4) 00:10:23.520 26846.072 - 26951.351: 99.5565% ( 3) 00:10:23.520 26951.351 - 27161.908: 99.5633% ( 1) 00:10:23.520 31583.614 - 31794.172: 99.5770% ( 2) 00:10:23.520 31794.172 - 32004.729: 99.6384% ( 9) 00:10:23.520 32004.729 - 32215.287: 99.7817% ( 21) 00:10:23.520 32215.287 - 32425.844: 99.8499% ( 10) 00:10:23.520 32425.844 - 32636.402: 99.8840% ( 5) 00:10:23.520 32846.959 - 33057.516: 99.9386% ( 8) 00:10:23.520 33057.516 - 33268.074: 99.9864% ( 7) 00:10:23.520 33268.074 - 33478.631: 100.0000% ( 2) 00:10:23.520 00:10:23.520 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:23.520 ============================================================================== 00:10:23.520 Range in us Cumulative IO count 00:10:23.520 5290.255 - 5316.575: 0.0068% ( 1) 00:10:23.520 5421.854 - 5448.173: 0.0205% ( 2) 00:10:23.520 5448.173 - 5474.493: 0.0546% ( 5) 00:10:23.520 5474.493 - 5500.813: 0.0887% ( 5) 00:10:23.520 5500.813 - 5527.133: 0.1296% ( 6) 00:10:23.520 5527.133 - 5553.452: 0.1774% ( 7) 00:10:23.520 5553.452 - 5579.772: 0.3002% ( 18) 00:10:23.520 5579.772 - 5606.092: 0.3275% ( 4) 00:10:23.520 5606.092 - 5632.411: 0.3548% ( 4) 00:10:23.520 5632.411 - 5658.731: 0.3889% ( 5) 00:10:23.520 5658.731 - 5685.051: 0.4026% ( 2) 00:10:23.520 5685.051 - 5711.370: 0.4162% ( 2) 00:10:23.520 5711.370 - 5737.690: 0.4299% ( 2) 00:10:23.520 5737.690 - 5764.010: 0.4367% ( 1) 00:10:23.520 6527.280 - 6553.600: 0.4503% ( 2) 00:10:23.520 6553.600 - 6579.920: 0.4844% ( 5) 00:10:23.520 6579.920 - 6606.239: 0.5186% ( 5) 00:10:23.520 6606.239 - 6632.559: 0.5459% ( 4) 00:10:23.520 6632.559 - 6658.879: 0.6277% ( 12) 00:10:23.520 6658.879 - 6685.198: 0.7574% ( 19) 00:10:23.520 6685.198 - 6711.518: 0.7847% ( 4) 00:10:23.520 6711.518 - 6737.838: 0.8120% ( 4) 00:10:23.520 6737.838 - 6790.477: 0.8529% ( 6) 00:10:23.520 6790.477 - 6843.116: 0.8665% ( 2) 00:10:23.520 6843.116 - 6895.756: 0.8734% ( 1) 00:10:23.520 6895.756 - 6948.395: 0.8802% ( 1) 00:10:23.520 7001.035 - 7053.674: 0.9075% ( 4) 00:10:23.520 7053.674 - 7106.313: 1.0098% ( 15) 00:10:23.520 7106.313 - 7158.953: 1.2486% ( 35) 00:10:23.520 7158.953 - 7211.592: 1.6444% ( 58) 00:10:23.520 7211.592 - 7264.231: 2.1698% ( 77) 00:10:23.520 7264.231 - 7316.871: 2.7838% ( 90) 00:10:23.520 7316.871 - 7369.510: 3.3024% ( 76) 00:10:23.520 7369.510 - 7422.149: 3.9097% ( 89) 00:10:23.520 7422.149 - 7474.789: 4.7830% ( 128) 00:10:23.520 7474.789 - 7527.428: 5.9498% ( 171) 00:10:23.520 7527.428 - 7580.067: 7.5464% ( 234) 00:10:23.520 7580.067 - 7632.707: 9.2112% ( 244) 00:10:23.520 7632.707 - 7685.346: 10.8010% ( 233) 00:10:23.520 7685.346 - 7737.986: 12.3840% ( 232) 00:10:23.520 7737.986 - 7790.625: 13.7213% ( 196) 00:10:23.520 7790.625 - 7843.264: 15.2156% ( 219) 00:10:23.520 7843.264 - 7895.904: 16.9078% ( 248) 00:10:23.520 7895.904 - 7948.543: 19.2617% ( 345) 00:10:23.520 7948.543 - 8001.182: 21.9159% ( 
389) 00:10:23.520 8001.182 - 8053.822: 25.7028% ( 555) 00:10:23.520 8053.822 - 8106.461: 28.7323% ( 444) 00:10:23.520 8106.461 - 8159.100: 31.7344% ( 440) 00:10:23.520 8159.100 - 8211.740: 34.8253% ( 453) 00:10:23.520 8211.740 - 8264.379: 37.9026% ( 451) 00:10:23.520 8264.379 - 8317.018: 41.2459% ( 490) 00:10:23.520 8317.018 - 8369.658: 44.5620% ( 486) 00:10:23.520 8369.658 - 8422.297: 47.4823% ( 428) 00:10:23.520 8422.297 - 8474.937: 50.0000% ( 369) 00:10:23.520 8474.937 - 8527.576: 52.4768% ( 363) 00:10:23.520 8527.576 - 8580.215: 55.1242% ( 388) 00:10:23.520 8580.215 - 8632.855: 57.8534% ( 400) 00:10:23.520 8632.855 - 8685.494: 60.0164% ( 317) 00:10:23.520 8685.494 - 8738.133: 62.8548% ( 416) 00:10:23.520 8738.133 - 8790.773: 65.2838% ( 356) 00:10:23.520 8790.773 - 8843.412: 67.8493% ( 376) 00:10:23.520 8843.412 - 8896.051: 70.5445% ( 395) 00:10:23.520 8896.051 - 8948.691: 73.2396% ( 395) 00:10:23.520 8948.691 - 9001.330: 75.4913% ( 330) 00:10:23.521 9001.330 - 9053.969: 77.6269% ( 313) 00:10:23.521 9053.969 - 9106.609: 79.3532% ( 253) 00:10:23.521 9106.609 - 9159.248: 80.6905% ( 196) 00:10:23.521 9159.248 - 9211.888: 81.9937% ( 191) 00:10:23.521 9211.888 - 9264.527: 83.0172% ( 150) 00:10:23.521 9264.527 - 9317.166: 84.2181% ( 176) 00:10:23.521 9317.166 - 9369.806: 85.1597% ( 138) 00:10:23.521 9369.806 - 9422.445: 86.4288% ( 186) 00:10:23.521 9422.445 - 9475.084: 87.7252% ( 190) 00:10:23.521 9475.084 - 9527.724: 89.1512% ( 209) 00:10:23.521 9527.724 - 9580.363: 90.7410% ( 233) 00:10:23.521 9580.363 - 9633.002: 92.0579% ( 193) 00:10:23.521 9633.002 - 9685.642: 93.1837% ( 165) 00:10:23.521 9685.642 - 9738.281: 94.3641% ( 173) 00:10:23.521 9738.281 - 9790.920: 95.2989% ( 137) 00:10:23.521 9790.920 - 9843.560: 95.8584% ( 82) 00:10:23.521 9843.560 - 9896.199: 96.4247% ( 83) 00:10:23.521 9896.199 - 9948.839: 96.9091% ( 71) 00:10:23.521 9948.839 - 10001.478: 97.3390% ( 63) 00:10:23.521 10001.478 - 10054.117: 97.5573% ( 32) 00:10:23.521 10054.117 - 10106.757: 97.6801% ( 18) 00:10:23.521 10106.757 - 10159.396: 97.7688% ( 13) 00:10:23.521 10159.396 - 10212.035: 97.8780% ( 16) 00:10:23.521 10212.035 - 10264.675: 97.9872% ( 16) 00:10:23.521 10264.675 - 10317.314: 98.0759% ( 13) 00:10:23.521 10317.314 - 10369.953: 98.1919% ( 17) 00:10:23.521 10369.953 - 10422.593: 98.4648% ( 40) 00:10:23.521 10422.593 - 10475.232: 98.5330% ( 10) 00:10:23.521 10475.232 - 10527.871: 98.5944% ( 9) 00:10:23.521 10527.871 - 10580.511: 98.6285% ( 5) 00:10:23.521 10580.511 - 10633.150: 98.6422% ( 2) 00:10:23.521 10633.150 - 10685.790: 98.6627% ( 3) 00:10:23.521 10685.790 - 10738.429: 98.6763% ( 2) 00:10:23.521 10738.429 - 10791.068: 98.6900% ( 2) 00:10:23.521 11317.462 - 11370.101: 98.6968% ( 1) 00:10:23.521 11528.019 - 11580.659: 98.7036% ( 1) 00:10:23.521 11580.659 - 11633.298: 98.7241% ( 3) 00:10:23.521 11633.298 - 11685.937: 98.7445% ( 3) 00:10:23.521 11685.937 - 11738.577: 98.7787% ( 5) 00:10:23.521 11738.577 - 11791.216: 98.8196% ( 6) 00:10:23.521 11791.216 - 11843.855: 98.8469% ( 4) 00:10:23.521 11843.855 - 11896.495: 98.8878% ( 6) 00:10:23.521 11896.495 - 11949.134: 98.9629% ( 11) 00:10:23.521 11949.134 - 12001.773: 98.9970% ( 5) 00:10:23.521 12001.773 - 12054.413: 99.0379% ( 6) 00:10:23.521 12054.413 - 12107.052: 99.0652% ( 4) 00:10:23.521 12107.052 - 12159.692: 99.0993% ( 5) 00:10:23.521 12159.692 - 12212.331: 99.1266% ( 4) 00:10:23.521 24635.219 - 24740.498: 99.1335% ( 1) 00:10:23.521 25056.334 - 25161.613: 99.1539% ( 3) 00:10:23.521 25161.613 - 25266.892: 99.2222% ( 10) 00:10:23.521 25266.892 - 25372.170: 
99.2699% ( 7) 00:10:23.521 25372.170 - 25477.449: 99.3382% ( 10) 00:10:23.521 25477.449 - 25582.728: 99.3859% ( 7) 00:10:23.521 25582.728 - 25688.006: 99.4064% ( 3) 00:10:23.521 25688.006 - 25793.285: 99.4337% ( 4) 00:10:23.521 25793.285 - 25898.564: 99.4541% ( 3) 00:10:23.521 25898.564 - 26003.843: 99.4814% ( 4) 00:10:23.521 26003.843 - 26109.121: 99.5019% ( 3) 00:10:23.521 26109.121 - 26214.400: 99.5360% ( 5) 00:10:23.521 26214.400 - 26319.679: 99.5633% ( 4) 00:10:23.521 30951.942 - 31162.500: 99.5906% ( 4) 00:10:23.521 31162.500 - 31373.057: 99.6588% ( 10) 00:10:23.521 31373.057 - 31583.614: 99.8158% ( 23) 00:10:23.521 31794.172 - 32004.729: 99.8431% ( 4) 00:10:23.521 32004.729 - 32215.287: 99.8977% ( 8) 00:10:23.521 32215.287 - 32425.844: 99.9454% ( 7) 00:10:23.521 32425.844 - 32636.402: 99.9932% ( 7) 00:10:23.521 32636.402 - 32846.959: 100.0000% ( 1) 00:10:23.521 00:10:23.521 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:23.521 ============================================================================== 00:10:23.521 Range in us Cumulative IO count 00:10:23.521 5132.337 - 5158.657: 0.0068% ( 1) 00:10:23.521 5316.575 - 5342.895: 0.0273% ( 3) 00:10:23.521 5342.895 - 5369.214: 0.1160% ( 13) 00:10:23.521 5369.214 - 5395.534: 0.2866% ( 25) 00:10:23.521 5395.534 - 5421.854: 0.3275% ( 6) 00:10:23.521 5421.854 - 5448.173: 0.3684% ( 6) 00:10:23.521 5448.173 - 5474.493: 0.3821% ( 2) 00:10:23.521 5474.493 - 5500.813: 0.3957% ( 2) 00:10:23.521 5500.813 - 5527.133: 0.4094% ( 2) 00:10:23.521 5527.133 - 5553.452: 0.4162% ( 1) 00:10:23.521 5553.452 - 5579.772: 0.4299% ( 2) 00:10:23.521 5579.772 - 5606.092: 0.4367% ( 1) 00:10:23.521 6316.723 - 6343.043: 0.4435% ( 1) 00:10:23.521 6474.641 - 6500.961: 0.4572% ( 2) 00:10:23.521 6500.961 - 6527.280: 0.4708% ( 2) 00:10:23.521 6527.280 - 6553.600: 0.4981% ( 4) 00:10:23.521 6553.600 - 6579.920: 0.5459% ( 7) 00:10:23.521 6579.920 - 6606.239: 0.5800% ( 5) 00:10:23.521 6606.239 - 6632.559: 0.6141% ( 5) 00:10:23.521 6632.559 - 6658.879: 0.7164% ( 15) 00:10:23.521 6658.879 - 6685.198: 0.7505% ( 5) 00:10:23.521 6685.198 - 6711.518: 0.7778% ( 4) 00:10:23.521 6711.518 - 6737.838: 0.8051% ( 4) 00:10:23.521 6737.838 - 6790.477: 0.8529% ( 7) 00:10:23.521 6790.477 - 6843.116: 0.8802% ( 4) 00:10:23.521 6843.116 - 6895.756: 0.9007% ( 3) 00:10:23.521 6895.756 - 6948.395: 0.9211% ( 3) 00:10:23.521 6948.395 - 7001.035: 0.9825% ( 9) 00:10:23.521 7001.035 - 7053.674: 1.1053% ( 18) 00:10:23.521 7053.674 - 7106.313: 1.3783% ( 40) 00:10:23.521 7106.313 - 7158.953: 1.6990% ( 47) 00:10:23.521 7158.953 - 7211.592: 2.2312% ( 78) 00:10:23.521 7211.592 - 7264.231: 2.6610% ( 63) 00:10:23.521 7264.231 - 7316.871: 3.1455% ( 71) 00:10:23.521 7316.871 - 7369.510: 3.6094% ( 68) 00:10:23.521 7369.510 - 7422.149: 4.1962% ( 86) 00:10:23.521 7422.149 - 7474.789: 4.9468% ( 110) 00:10:23.521 7474.789 - 7527.428: 6.1340% ( 174) 00:10:23.521 7527.428 - 7580.067: 7.4372% ( 191) 00:10:23.521 7580.067 - 7632.707: 9.0748% ( 240) 00:10:23.521 7632.707 - 7685.346: 10.6987% ( 238) 00:10:23.521 7685.346 - 7737.986: 12.0906% ( 204) 00:10:23.521 7737.986 - 7790.625: 13.5508% ( 214) 00:10:23.521 7790.625 - 7843.264: 15.3725% ( 267) 00:10:23.521 7843.264 - 7895.904: 17.4331% ( 302) 00:10:23.521 7895.904 - 7948.543: 19.8622% ( 356) 00:10:23.521 7948.543 - 8001.182: 23.0076% ( 461) 00:10:23.521 8001.182 - 8053.822: 26.1190% ( 456) 00:10:23.521 8053.822 - 8106.461: 29.5852% ( 508) 00:10:23.521 8106.461 - 8159.100: 32.8466% ( 478) 00:10:23.521 8159.100 - 8211.740: 35.6509% ( 411) 00:10:23.521 
8211.740 - 8264.379: 38.4962% ( 417) 00:10:23.521 8264.379 - 8317.018: 41.0617% ( 376) 00:10:23.521 8317.018 - 8369.658: 43.6613% ( 381) 00:10:23.521 8369.658 - 8422.297: 46.0221% ( 346) 00:10:23.521 8422.297 - 8474.937: 48.8401% ( 413) 00:10:23.521 8474.937 - 8527.576: 51.3783% ( 372) 00:10:23.521 8527.576 - 8580.215: 54.1416% ( 405) 00:10:23.521 8580.215 - 8632.855: 57.4099% ( 479) 00:10:23.521 8632.855 - 8685.494: 60.2006% ( 409) 00:10:23.521 8685.494 - 8738.133: 62.8821% ( 393) 00:10:23.521 8738.133 - 8790.773: 65.4954% ( 383) 00:10:23.521 8790.773 - 8843.412: 68.0131% ( 369) 00:10:23.521 8843.412 - 8896.051: 70.6537% ( 387) 00:10:23.521 8896.051 - 8948.691: 72.9394% ( 335) 00:10:23.521 8948.691 - 9001.330: 74.6452% ( 250) 00:10:23.521 9001.330 - 9053.969: 76.4738% ( 268) 00:10:23.521 9053.969 - 9106.609: 78.0363% ( 229) 00:10:23.521 9106.609 - 9159.248: 80.1583% ( 311) 00:10:23.521 9159.248 - 9211.888: 81.7344% ( 231) 00:10:23.521 9211.888 - 9264.527: 83.0922% ( 199) 00:10:23.521 9264.527 - 9317.166: 84.7980% ( 250) 00:10:23.521 9317.166 - 9369.806: 86.2514% ( 213) 00:10:23.521 9369.806 - 9422.445: 87.4045% ( 169) 00:10:23.521 9422.445 - 9475.084: 88.7759% ( 201) 00:10:23.521 9475.084 - 9527.724: 89.8199% ( 153) 00:10:23.521 9527.724 - 9580.363: 90.9457% ( 165) 00:10:23.521 9580.363 - 9633.002: 92.2967% ( 198) 00:10:23.521 9633.002 - 9685.642: 93.2724% ( 143) 00:10:23.521 9685.642 - 9738.281: 94.1730% ( 132) 00:10:23.521 9738.281 - 9790.920: 95.1078% ( 137) 00:10:23.521 9790.920 - 9843.560: 95.8584% ( 110) 00:10:23.521 9843.560 - 9896.199: 96.4178% ( 82) 00:10:23.521 9896.199 - 9948.839: 96.7590% ( 50) 00:10:23.521 9948.839 - 10001.478: 97.1479% ( 57) 00:10:23.521 10001.478 - 10054.117: 97.3458% ( 29) 00:10:23.521 10054.117 - 10106.757: 97.5027% ( 23) 00:10:23.521 10106.757 - 10159.396: 97.7074% ( 30) 00:10:23.521 10159.396 - 10212.035: 97.8712% ( 24) 00:10:23.521 10212.035 - 10264.675: 98.1578% ( 42) 00:10:23.521 10264.675 - 10317.314: 98.2669% ( 16) 00:10:23.521 10317.314 - 10369.953: 98.3693% ( 15) 00:10:23.521 10369.953 - 10422.593: 98.5262% ( 23) 00:10:23.521 10422.593 - 10475.232: 98.5808% ( 8) 00:10:23.521 10475.232 - 10527.871: 98.6217% ( 6) 00:10:23.521 10527.871 - 10580.511: 98.6422% ( 3) 00:10:23.521 10580.511 - 10633.150: 98.6558% ( 2) 00:10:23.521 10633.150 - 10685.790: 98.6763% ( 3) 00:10:23.521 10685.790 - 10738.429: 98.6900% ( 2) 00:10:23.521 10896.347 - 10948.986: 98.6968% ( 1) 00:10:23.521 11264.822 - 11317.462: 98.7104% ( 2) 00:10:23.521 11317.462 - 11370.101: 98.7377% ( 4) 00:10:23.521 11370.101 - 11422.741: 98.7855% ( 7) 00:10:23.521 11422.741 - 11475.380: 98.8264% ( 6) 00:10:23.521 11475.380 - 11528.019: 99.0379% ( 31) 00:10:23.521 11528.019 - 11580.659: 99.0516% ( 2) 00:10:23.521 11580.659 - 11633.298: 99.0652% ( 2) 00:10:23.521 11633.298 - 11685.937: 99.0857% ( 3) 00:10:23.521 11685.937 - 11738.577: 99.0993% ( 2) 00:10:23.521 11738.577 - 11791.216: 99.1130% ( 2) 00:10:23.521 11791.216 - 11843.855: 99.1266% ( 2) 00:10:23.521 24635.219 - 24740.498: 99.1335% ( 1) 00:10:23.521 24740.498 - 24845.777: 99.1880% ( 8) 00:10:23.521 24845.777 - 24951.055: 99.2631% ( 11) 00:10:23.521 24951.055 - 25056.334: 99.3177% ( 8) 00:10:23.521 25056.334 - 25161.613: 99.3791% ( 9) 00:10:23.521 25161.613 - 25266.892: 99.4064% ( 4) 00:10:23.521 25266.892 - 25372.170: 99.4337% ( 4) 00:10:23.521 25372.170 - 25477.449: 99.4541% ( 3) 00:10:23.521 25477.449 - 25582.728: 99.4814% ( 4) 00:10:23.521 25582.728 - 25688.006: 99.5087% ( 4) 00:10:23.521 25688.006 - 25793.285: 99.5360% ( 4) 
00:10:23.521 25793.285 - 25898.564: 99.5633% ( 4) 00:10:23.521 30320.270 - 30530.827: 99.6384% ( 11) 00:10:23.521 30530.827 - 30741.385: 99.7544% ( 17) 00:10:23.521 30741.385 - 30951.942: 99.7817% ( 4) 00:10:23.521 30951.942 - 31162.500: 99.8021% ( 3) 00:10:23.521 31162.500 - 31373.057: 99.8567% ( 8) 00:10:23.521 31373.057 - 31583.614: 99.9113% ( 8) 00:10:23.521 31583.614 - 31794.172: 99.9659% ( 8) 00:10:23.521 31794.172 - 32004.729: 100.0000% ( 5) 00:10:23.521 00:10:23.521 01:20:08 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:23.521 00:10:23.521 real 0m2.586s 00:10:23.521 user 0m2.222s 00:10:23.521 sys 0m0.270s 00:10:23.521 01:20:08 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.521 01:20:08 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:23.521 ************************************ 00:10:23.521 END TEST nvme_perf 00:10:23.521 ************************************ 00:10:23.521 01:20:08 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:23.521 01:20:08 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:23.521 01:20:08 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.521 01:20:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.521 ************************************ 00:10:23.521 START TEST nvme_hello_world 00:10:23.521 ************************************ 00:10:23.521 01:20:08 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:23.779 Initializing NVMe Controllers 00:10:23.779 Attached to 0000:00:10.0 00:10:23.779 Namespace ID: 1 size: 6GB 00:10:23.779 Attached to 0000:00:11.0 00:10:23.779 Namespace ID: 1 size: 5GB 00:10:23.779 Attached to 0000:00:13.0 00:10:23.779 Namespace ID: 1 size: 1GB 00:10:23.779 Attached to 0000:00:12.0 00:10:23.779 Namespace ID: 1 size: 4GB 00:10:23.779 Namespace ID: 2 size: 4GB 00:10:23.779 Namespace ID: 3 size: 4GB 00:10:23.779 Initialization complete. 00:10:23.779 INFO: using host memory buffer for IO 00:10:23.779 Hello world! 00:10:23.779 INFO: using host memory buffer for IO 00:10:23.779 Hello world! 00:10:23.779 INFO: using host memory buffer for IO 00:10:23.779 Hello world! 00:10:23.779 INFO: using host memory buffer for IO 00:10:23.779 Hello world! 00:10:23.779 INFO: using host memory buffer for IO 00:10:23.779 Hello world! 00:10:23.779 INFO: using host memory buffer for IO 00:10:23.779 Hello world! 
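Editor's sketch: the per-controller tables that nvme_perf printed above are cumulative latency histograms; each "low - high" row gives the bucket bounds in microseconds and the cumulative share of IOs completed at or below that bucket, with the per-bucket count in parentheses. The following is an illustrative, unofficial snippet (not part of the SPDK tree) for pulling the first bucket at or above a target percentile out of a saved copy of that output; the field layout (one bucket per line in the "low - high: percent% ( count)" form, with or without a leading timestamp column) is an assumption based on this log.
    #!/usr/bin/env bash
    # p99_bucket.sh - illustrative only; histogram layout is assumed from this log.
    log=${1:-nvme_perf.log}      # a saved copy of one cumulative latency histogram
    target=${2:-99}              # percentile of interest
    awk -v target="$target" '
        / - [0-9.]+: +[0-9.]+%/ {
            for (i = 2; i <= NF; i++) if ($i == "-") break    # locate the "low - high:" separator
            low = $(i - 1); high = $(i + 1); pct = $(i + 2)
            sub(/:$/, "", high); sub(/%$/, "", pct)
            if (pct + 0 >= target + 0) {                      # first bucket at/above the target percentile
                printf "first bucket at or above %s%%: %s - %s us\n", target, low, high
                exit
            }
        }' "$log"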
00:10:23.779 00:10:23.779 real 0m0.285s 00:10:23.779 user 0m0.097s 00:10:23.779 sys 0m0.138s 00:10:23.779 01:20:08 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.779 01:20:08 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:23.779 ************************************ 00:10:23.779 END TEST nvme_hello_world 00:10:23.779 ************************************ 00:10:23.779 01:20:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:23.779 01:20:09 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:23.779 01:20:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.779 01:20:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.779 ************************************ 00:10:23.779 START TEST nvme_sgl 00:10:23.779 ************************************ 00:10:23.779 01:20:09 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:24.038 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:24.038 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:24.038 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:24.038 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:24.038 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:24.038 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:24.038 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:24.038 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:24.038 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:24.038 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:24.038 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:24.038 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:24.038 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:24.038 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:24.038 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:24.296 NVMe Readv/Writev Request test 00:10:24.296 Attached to 0000:00:10.0 00:10:24.296 Attached to 0000:00:11.0 00:10:24.296 Attached to 0000:00:13.0 00:10:24.296 Attached to 0000:00:12.0 00:10:24.296 0000:00:10.0: build_io_request_2 test passed 00:10:24.296 0000:00:10.0: build_io_request_4 test passed 00:10:24.296 0000:00:10.0: build_io_request_5 test passed 00:10:24.296 0000:00:10.0: build_io_request_6 test passed 00:10:24.296 0000:00:10.0: build_io_request_7 test passed 00:10:24.296 0000:00:10.0: build_io_request_10 test passed 00:10:24.296 0000:00:11.0: build_io_request_2 test passed 00:10:24.296 0000:00:11.0: build_io_request_4 test passed 00:10:24.296 0000:00:11.0: build_io_request_5 test passed 00:10:24.296 0000:00:11.0: build_io_request_6 test passed 00:10:24.296 0000:00:11.0: build_io_request_7 test passed 00:10:24.296 0000:00:11.0: build_io_request_10 test passed 00:10:24.296 Cleaning up... 00:10:24.296 00:10:24.296 real 0m0.322s 00:10:24.296 user 0m0.132s 00:10:24.296 sys 0m0.139s 00:10:24.296 01:20:09 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.296 ************************************ 00:10:24.296 END TEST nvme_sgl 00:10:24.296 ************************************ 00:10:24.296 01:20:09 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:24.296 01:20:09 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:24.296 01:20:09 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:24.296 01:20:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.296 01:20:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.296 ************************************ 00:10:24.296 START TEST nvme_e2edp 00:10:24.296 ************************************ 00:10:24.296 01:20:09 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:24.556 NVMe Write/Read with End-to-End data protection test 00:10:24.556 Attached to 0000:00:10.0 00:10:24.556 Attached to 0000:00:11.0 00:10:24.556 Attached to 0000:00:13.0 00:10:24.556 Attached to 0000:00:12.0 00:10:24.556 Cleaning up... 
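Editor's sketch: the SGL pass above (where the "Invalid IO length parameter" lines appear to be its expected negative-path build_io_request cases, followed by the "test passed" positive cases) and the end-to-end data-protection pass are standalone binaries, so they can be repeated by hand outside run_test. The paths below are the ones this log already uses; running as root and the optional -r transport filter (copied from the doorbell_aers invocation later in this log) are assumptions, not something this log confirms for these two binaries.
    # Sketch only: repeating the SGL and end-to-end data protection tests by hand.
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
    # hypothetically, to pin a run to a single controller (filter syntax borrowed
    # from the doorbell_aers invocation later in this log):
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl -r 'trtype:PCIe traddr:0000:00:10.0'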
00:10:24.556 00:10:24.556 real 0m0.282s 00:10:24.556 user 0m0.089s 00:10:24.556 sys 0m0.144s 00:10:24.556 ************************************ 00:10:24.556 END TEST nvme_e2edp 00:10:24.556 ************************************ 00:10:24.556 01:20:09 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.556 01:20:09 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:24.556 01:20:09 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:24.556 01:20:09 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:24.556 01:20:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.556 01:20:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.556 ************************************ 00:10:24.556 START TEST nvme_reserve 00:10:24.556 ************************************ 00:10:24.556 01:20:09 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:24.815 ===================================================== 00:10:24.815 NVMe Controller at PCI bus 0, device 16, function 0 00:10:24.815 ===================================================== 00:10:24.815 Reservations: Not Supported 00:10:24.815 ===================================================== 00:10:24.815 NVMe Controller at PCI bus 0, device 17, function 0 00:10:24.815 ===================================================== 00:10:24.815 Reservations: Not Supported 00:10:24.815 ===================================================== 00:10:24.815 NVMe Controller at PCI bus 0, device 19, function 0 00:10:24.815 ===================================================== 00:10:24.815 Reservations: Not Supported 00:10:24.815 ===================================================== 00:10:24.815 NVMe Controller at PCI bus 0, device 18, function 0 00:10:24.815 ===================================================== 00:10:24.815 Reservations: Not Supported 00:10:24.815 Reservation test passed 00:10:24.815 00:10:24.815 real 0m0.275s 00:10:24.815 user 0m0.101s 00:10:24.815 sys 0m0.127s 00:10:24.815 01:20:10 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.815 ************************************ 00:10:24.815 END TEST nvme_reserve 00:10:24.815 ************************************ 00:10:24.815 01:20:10 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:25.074 01:20:10 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:25.074 01:20:10 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:25.074 01:20:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.074 01:20:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.074 ************************************ 00:10:25.074 START TEST nvme_err_injection 00:10:25.074 ************************************ 00:10:25.074 01:20:10 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:25.334 NVMe Error Injection test 00:10:25.334 Attached to 0000:00:10.0 00:10:25.334 Attached to 0000:00:11.0 00:10:25.334 Attached to 0000:00:13.0 00:10:25.334 Attached to 0000:00:12.0 00:10:25.334 0000:00:10.0: get features failed as expected 00:10:25.334 0000:00:11.0: get features failed as expected 00:10:25.334 0000:00:13.0: get features failed as expected 00:10:25.334 0000:00:12.0: get features failed as expected 00:10:25.334 
0000:00:11.0: get features successfully as expected 00:10:25.334 0000:00:13.0: get features successfully as expected 00:10:25.334 0000:00:12.0: get features successfully as expected 00:10:25.334 0000:00:10.0: get features successfully as expected 00:10:25.334 0000:00:11.0: read failed as expected 00:10:25.334 0000:00:13.0: read failed as expected 00:10:25.334 0000:00:12.0: read failed as expected 00:10:25.334 0000:00:10.0: read failed as expected 00:10:25.334 0000:00:10.0: read successfully as expected 00:10:25.334 0000:00:11.0: read successfully as expected 00:10:25.334 0000:00:13.0: read successfully as expected 00:10:25.334 0000:00:12.0: read successfully as expected 00:10:25.334 Cleaning up... 00:10:25.334 00:10:25.334 real 0m0.279s 00:10:25.334 user 0m0.092s 00:10:25.334 sys 0m0.142s 00:10:25.334 01:20:10 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:25.334 ************************************ 00:10:25.334 01:20:10 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:25.334 END TEST nvme_err_injection 00:10:25.334 ************************************ 00:10:25.334 01:20:10 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:25.334 01:20:10 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:10:25.334 01:20:10 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.334 01:20:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.334 ************************************ 00:10:25.334 START TEST nvme_overhead 00:10:25.334 ************************************ 00:10:25.334 01:20:10 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:26.711 Initializing NVMe Controllers 00:10:26.711 Attached to 0000:00:10.0 00:10:26.711 Attached to 0000:00:11.0 00:10:26.711 Attached to 0000:00:13.0 00:10:26.711 Attached to 0000:00:12.0 00:10:26.711 Initialization complete. Launching workers. 
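Editor's sketch: the overhead run above was launched as "overhead -o 4096 -t 1 -H -i 0", and the submit/complete histograms that follow come from that one-second run. Below is a sketch of repeating it with a larger IO size and a longer run; the flag meanings are only inferred from the values in this log (-o IO size in bytes, -t run time in seconds, -H print histograms, -i shared-memory id) and should be confirmed against the binary's own help output before relying on them.
    # Sketch only: a 10-second overhead run with 128 KiB IOs; flag meanings are
    # inferred from this log, not confirmed here.
    sudo /home/vagrant/spdk_repo/spdk/build/examples/overhead -o 131072 -t 10 -H -i 0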
00:10:26.711 submit (in ns) avg, min, max = 13847.3, 12380.7, 107262.7 00:10:26.711 complete (in ns) avg, min, max = 8997.1, 8212.9, 83615.3 00:10:26.711 00:10:26.711 Submit histogram 00:10:26.711 ================ 00:10:26.711 Range in us Cumulative Count 00:10:26.711 12.337 - 12.389: 0.0182% ( 1) 00:10:26.711 12.543 - 12.594: 0.0364% ( 1) 00:10:26.711 12.646 - 12.697: 0.0911% ( 3) 00:10:26.711 12.697 - 12.749: 0.1639% ( 4) 00:10:26.711 12.749 - 12.800: 0.4736% ( 17) 00:10:26.711 12.800 - 12.851: 0.9472% ( 26) 00:10:26.711 12.851 - 12.903: 1.5118% ( 31) 00:10:26.711 12.903 - 12.954: 2.4044% ( 49) 00:10:26.711 12.954 - 13.006: 4.0437% ( 90) 00:10:26.711 13.006 - 13.057: 5.8470% ( 99) 00:10:26.711 13.057 - 13.108: 9.1257% ( 180) 00:10:26.711 13.108 - 13.160: 13.2969% ( 229) 00:10:26.711 13.160 - 13.263: 23.9526% ( 585) 00:10:26.711 13.263 - 13.365: 37.1949% ( 727) 00:10:26.711 13.365 - 13.468: 50.9107% ( 753) 00:10:26.711 13.468 - 13.571: 63.1330% ( 671) 00:10:26.711 13.571 - 13.674: 72.9690% ( 540) 00:10:26.711 13.674 - 13.777: 80.4554% ( 411) 00:10:26.711 13.777 - 13.880: 86.1202% ( 311) 00:10:26.711 13.880 - 13.982: 89.6357% ( 193) 00:10:26.711 13.982 - 14.085: 92.1858% ( 140) 00:10:26.711 14.085 - 14.188: 93.4791% ( 71) 00:10:26.711 14.188 - 14.291: 94.2077% ( 40) 00:10:26.711 14.291 - 14.394: 94.5719% ( 20) 00:10:26.711 14.394 - 14.496: 94.6448% ( 4) 00:10:26.711 14.496 - 14.599: 94.7177% ( 4) 00:10:26.711 14.702 - 14.805: 94.7541% ( 2) 00:10:26.711 15.010 - 15.113: 94.7723% ( 1) 00:10:26.711 15.524 - 15.627: 94.7905% ( 1) 00:10:26.711 15.936 - 16.039: 94.8087% ( 1) 00:10:26.711 16.141 - 16.244: 94.8270% ( 1) 00:10:26.711 16.244 - 16.347: 94.8452% ( 1) 00:10:26.711 16.655 - 16.758: 94.8816% ( 2) 00:10:26.711 16.758 - 16.861: 94.8998% ( 1) 00:10:26.711 17.169 - 17.272: 94.9362% ( 2) 00:10:26.711 17.272 - 17.375: 94.9727% ( 2) 00:10:26.711 17.375 - 17.478: 95.0273% ( 3) 00:10:26.711 17.478 - 17.581: 95.1548% ( 7) 00:10:26.711 17.581 - 17.684: 95.3734% ( 12) 00:10:26.711 17.684 - 17.786: 95.4827% ( 6) 00:10:26.711 17.786 - 17.889: 95.8106% ( 18) 00:10:26.711 17.889 - 17.992: 95.9563% ( 8) 00:10:26.711 17.992 - 18.095: 96.1749% ( 12) 00:10:26.711 18.095 - 18.198: 96.4117% ( 13) 00:10:26.711 18.198 - 18.300: 96.7213% ( 17) 00:10:26.711 18.300 - 18.403: 96.9217% ( 11) 00:10:26.711 18.403 - 18.506: 97.1585% ( 13) 00:10:26.711 18.506 - 18.609: 97.2313% ( 4) 00:10:26.711 18.609 - 18.712: 97.3588% ( 7) 00:10:26.711 18.712 - 18.814: 97.5046% ( 8) 00:10:26.711 18.814 - 18.917: 97.6321% ( 7) 00:10:26.711 18.917 - 19.020: 97.7049% ( 4) 00:10:26.711 19.020 - 19.123: 97.8142% ( 6) 00:10:26.711 19.123 - 19.226: 97.9235% ( 6) 00:10:26.711 19.226 - 19.329: 98.0692% ( 8) 00:10:26.711 19.329 - 19.431: 98.2149% ( 8) 00:10:26.711 19.431 - 19.534: 98.3242% ( 6) 00:10:26.711 19.534 - 19.637: 98.4153% ( 5) 00:10:26.711 19.637 - 19.740: 98.5792% ( 9) 00:10:26.711 19.740 - 19.843: 98.6703% ( 5) 00:10:26.711 19.843 - 19.945: 98.7250% ( 3) 00:10:26.711 19.945 - 20.048: 98.7432% ( 1) 00:10:26.711 20.151 - 20.254: 98.7978% ( 3) 00:10:26.711 20.459 - 20.562: 98.8160% ( 1) 00:10:26.711 20.562 - 20.665: 98.8342% ( 1) 00:10:26.711 20.665 - 20.768: 98.9071% ( 4) 00:10:26.711 20.768 - 20.871: 98.9617% ( 3) 00:10:26.711 20.871 - 20.973: 98.9800% ( 1) 00:10:26.711 21.076 - 21.179: 99.0164% ( 2) 00:10:26.711 21.179 - 21.282: 99.0528% ( 2) 00:10:26.711 21.282 - 21.385: 99.0710% ( 1) 00:10:26.711 21.385 - 21.488: 99.1621% ( 5) 00:10:26.711 21.488 - 21.590: 99.1803% ( 1) 00:10:26.711 21.693 - 21.796: 99.2350% ( 3) 00:10:26.711 
21.796 - 21.899: 99.2532% ( 1) 00:10:26.711 21.899 - 22.002: 99.2714% ( 1) 00:10:26.711 22.002 - 22.104: 99.2896% ( 1) 00:10:26.711 22.516 - 22.618: 99.3078% ( 1) 00:10:26.711 22.618 - 22.721: 99.3260% ( 1) 00:10:26.711 22.721 - 22.824: 99.3443% ( 1) 00:10:26.711 23.338 - 23.441: 99.3625% ( 1) 00:10:26.711 23.441 - 23.544: 99.3989% ( 2) 00:10:26.711 23.647 - 23.749: 99.4353% ( 2) 00:10:26.711 23.749 - 23.852: 99.4718% ( 2) 00:10:26.711 23.852 - 23.955: 99.5082% ( 2) 00:10:26.711 23.955 - 24.058: 99.5446% ( 2) 00:10:26.711 24.058 - 24.161: 99.5628% ( 1) 00:10:26.711 24.161 - 24.263: 99.5993% ( 2) 00:10:26.711 24.675 - 24.778: 99.6175% ( 1) 00:10:26.711 24.983 - 25.086: 99.6357% ( 1) 00:10:26.711 25.394 - 25.497: 99.6539% ( 1) 00:10:26.711 28.376 - 28.582: 99.6721% ( 1) 00:10:26.711 28.582 - 28.787: 99.6903% ( 1) 00:10:26.711 29.198 - 29.404: 99.7086% ( 1) 00:10:26.711 29.404 - 29.610: 99.7268% ( 1) 00:10:26.711 29.815 - 30.021: 99.7450% ( 1) 00:10:26.711 30.021 - 30.227: 99.7632% ( 1) 00:10:26.711 31.255 - 31.460: 99.7814% ( 1) 00:10:26.711 31.460 - 31.666: 99.7996% ( 1) 00:10:26.711 32.488 - 32.694: 99.8361% ( 2) 00:10:26.711 33.928 - 34.133: 99.8543% ( 1) 00:10:26.711 39.068 - 39.274: 99.8725% ( 1) 00:10:26.711 41.741 - 41.947: 99.8907% ( 1) 00:10:26.711 42.153 - 42.358: 99.9089% ( 1) 00:10:26.711 46.059 - 46.265: 99.9271% ( 1) 00:10:26.711 46.882 - 47.088: 99.9454% ( 1) 00:10:26.711 67.444 - 67.855: 99.9636% ( 1) 00:10:26.711 97.465 - 97.876: 99.9818% ( 1) 00:10:26.711 106.924 - 107.746: 100.0000% ( 1) 00:10:26.711 00:10:26.711 Complete histogram 00:10:26.711 ================== 00:10:26.711 Range in us Cumulative Count 00:10:26.711 8.173 - 8.225: 0.0182% ( 1) 00:10:26.711 8.276 - 8.328: 0.0729% ( 3) 00:10:26.711 8.328 - 8.379: 0.1821% ( 6) 00:10:26.711 8.379 - 8.431: 0.5647% ( 21) 00:10:26.711 8.431 - 8.482: 1.5847% ( 56) 00:10:26.711 8.482 - 8.533: 2.7140% ( 62) 00:10:26.711 8.533 - 8.585: 5.0273% ( 127) 00:10:26.712 8.585 - 8.636: 15.2641% ( 562) 00:10:26.712 8.636 - 8.688: 28.5428% ( 729) 00:10:26.712 8.688 - 8.739: 37.6503% ( 500) 00:10:26.712 8.739 - 8.790: 53.9344% ( 894) 00:10:26.712 8.790 - 8.842: 68.0328% ( 774) 00:10:26.712 8.842 - 8.893: 76.3752% ( 458) 00:10:26.712 8.893 - 8.945: 81.9672% ( 307) 00:10:26.712 8.945 - 8.996: 86.6485% ( 257) 00:10:26.712 8.996 - 9.047: 90.4007% ( 206) 00:10:26.712 9.047 - 9.099: 92.9326% ( 139) 00:10:26.712 9.099 - 9.150: 94.6630% ( 95) 00:10:26.712 9.150 - 9.202: 95.7559% ( 60) 00:10:26.712 9.202 - 9.253: 96.3570% ( 33) 00:10:26.712 9.253 - 9.304: 96.7031% ( 19) 00:10:26.712 9.304 - 9.356: 96.9581% ( 14) 00:10:26.712 9.356 - 9.407: 97.0128% ( 3) 00:10:26.712 9.407 - 9.459: 97.1403% ( 7) 00:10:26.712 9.459 - 9.510: 97.2495% ( 6) 00:10:26.712 9.510 - 9.561: 97.2678% ( 1) 00:10:26.712 9.561 - 9.613: 97.3042% ( 2) 00:10:26.712 9.664 - 9.716: 97.3588% ( 3) 00:10:26.712 9.870 - 9.921: 97.3770% ( 1) 00:10:26.712 10.024 - 10.076: 97.4135% ( 2) 00:10:26.712 10.435 - 10.487: 97.4317% ( 1) 00:10:26.712 10.692 - 10.744: 97.4499% ( 1) 00:10:26.712 10.949 - 11.001: 97.4681% ( 1) 00:10:26.712 11.104 - 11.155: 97.4863% ( 1) 00:10:26.712 11.155 - 11.206: 97.5046% ( 1) 00:10:26.712 11.309 - 11.361: 97.5228% ( 1) 00:10:26.712 11.823 - 11.875: 97.5410% ( 1) 00:10:26.712 12.183 - 12.235: 97.5774% ( 2) 00:10:26.712 12.235 - 12.286: 97.5956% ( 1) 00:10:26.712 12.286 - 12.337: 97.6138% ( 1) 00:10:26.712 12.440 - 12.492: 97.6321% ( 1) 00:10:26.712 12.749 - 12.800: 97.6503% ( 1) 00:10:26.712 13.468 - 13.571: 97.6685% ( 1) 00:10:26.712 13.674 - 13.777: 97.6867% ( 1) 
00:10:26.712 13.777 - 13.880: 97.7231% ( 2) 00:10:26.712 13.880 - 13.982: 97.7413% ( 1) 00:10:26.712 13.982 - 14.085: 97.8324% ( 5) 00:10:26.712 14.085 - 14.188: 97.9053% ( 4) 00:10:26.712 14.188 - 14.291: 97.9781% ( 4) 00:10:26.712 14.291 - 14.394: 98.0328% ( 3) 00:10:26.712 14.394 - 14.496: 98.1421% ( 6) 00:10:26.712 14.496 - 14.599: 98.3242% ( 10) 00:10:26.712 14.599 - 14.702: 98.4699% ( 8) 00:10:26.712 14.702 - 14.805: 98.6703% ( 11) 00:10:26.712 14.805 - 14.908: 98.7796% ( 6) 00:10:26.712 14.908 - 15.010: 98.7978% ( 1) 00:10:26.712 15.010 - 15.113: 98.8707% ( 4) 00:10:26.712 15.113 - 15.216: 98.9253% ( 3) 00:10:26.712 15.216 - 15.319: 98.9617% ( 2) 00:10:26.712 15.319 - 15.422: 99.0164% ( 3) 00:10:26.712 15.422 - 15.524: 99.0346% ( 1) 00:10:26.712 15.524 - 15.627: 99.0528% ( 1) 00:10:26.712 15.730 - 15.833: 99.0710% ( 1) 00:10:26.712 16.244 - 16.347: 99.1257% ( 3) 00:10:26.712 16.450 - 16.553: 99.1439% ( 1) 00:10:26.712 16.655 - 16.758: 99.1621% ( 1) 00:10:26.712 16.758 - 16.861: 99.1803% ( 1) 00:10:26.712 16.861 - 16.964: 99.1985% ( 1) 00:10:26.712 17.375 - 17.478: 99.2168% ( 1) 00:10:26.712 18.609 - 18.712: 99.2350% ( 1) 00:10:26.712 18.814 - 18.917: 99.2714% ( 2) 00:10:26.712 18.917 - 19.020: 99.2896% ( 1) 00:10:26.712 19.020 - 19.123: 99.3078% ( 1) 00:10:26.712 19.123 - 19.226: 99.5082% ( 11) 00:10:26.712 19.226 - 19.329: 99.6721% ( 9) 00:10:26.712 19.329 - 19.431: 99.6903% ( 1) 00:10:26.712 19.534 - 19.637: 99.7268% ( 2) 00:10:26.712 19.740 - 19.843: 99.7450% ( 1) 00:10:26.712 20.254 - 20.357: 99.7632% ( 1) 00:10:26.712 20.357 - 20.459: 99.7814% ( 1) 00:10:26.712 20.459 - 20.562: 99.7996% ( 1) 00:10:26.712 21.076 - 21.179: 99.8179% ( 1) 00:10:26.712 24.469 - 24.572: 99.8361% ( 1) 00:10:26.712 24.572 - 24.675: 99.8543% ( 1) 00:10:26.712 24.675 - 24.778: 99.8907% ( 2) 00:10:26.712 24.880 - 24.983: 99.9089% ( 1) 00:10:26.712 24.983 - 25.086: 99.9271% ( 1) 00:10:26.712 25.189 - 25.292: 99.9454% ( 1) 00:10:26.712 28.787 - 28.993: 99.9636% ( 1) 00:10:26.712 34.133 - 34.339: 99.9818% ( 1) 00:10:26.712 83.483 - 83.894: 100.0000% ( 1) 00:10:26.712 00:10:26.712 00:10:26.712 real 0m1.252s 00:10:26.712 user 0m1.064s 00:10:26.712 sys 0m0.144s 00:10:26.712 01:20:11 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.712 01:20:11 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:26.712 ************************************ 00:10:26.712 END TEST nvme_overhead 00:10:26.712 ************************************ 00:10:26.712 01:20:11 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:26.712 01:20:11 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:26.712 01:20:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.712 01:20:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:26.712 ************************************ 00:10:26.712 START TEST nvme_arbitration 00:10:26.712 ************************************ 00:10:26.712 01:20:11 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:29.998 Initializing NVMe Controllers 00:10:29.998 Attached to 0000:00:10.0 00:10:29.998 Attached to 0000:00:11.0 00:10:29.998 Attached to 0000:00:13.0 00:10:29.998 Attached to 0000:00:12.0 00:10:29.998 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:29.998 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:29.998 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:29.998 
Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:29.998 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:29.998 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:29.998 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:29.998 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:29.998 Initialization complete. Launching workers. 00:10:29.998 Starting thread on core 1 with urgent priority queue 00:10:29.998 Starting thread on core 2 with urgent priority queue 00:10:29.998 Starting thread on core 3 with urgent priority queue 00:10:29.998 Starting thread on core 0 with urgent priority queue 00:10:29.998 QEMU NVMe Ctrl (12340 ) core 0: 3797.33 IO/s 26.33 secs/100000 ios 00:10:29.998 QEMU NVMe Ctrl (12342 ) core 0: 3797.33 IO/s 26.33 secs/100000 ios 00:10:29.998 QEMU NVMe Ctrl (12341 ) core 1: 3904.00 IO/s 25.61 secs/100000 ios 00:10:29.998 QEMU NVMe Ctrl (12342 ) core 1: 3904.00 IO/s 25.61 secs/100000 ios 00:10:29.998 QEMU NVMe Ctrl (12343 ) core 2: 3797.33 IO/s 26.33 secs/100000 ios 00:10:29.998 QEMU NVMe Ctrl (12342 ) core 3: 3605.33 IO/s 27.74 secs/100000 ios 00:10:29.998 ======================================================== 00:10:29.998 00:10:29.998 00:10:29.998 real 0m3.302s 00:10:29.998 user 0m9.072s 00:10:29.998 sys 0m0.159s 00:10:29.998 ************************************ 00:10:29.998 END TEST nvme_arbitration 00:10:29.998 ************************************ 00:10:29.998 01:20:15 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:29.998 01:20:15 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:29.998 01:20:15 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:29.998 01:20:15 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:29.998 01:20:15 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:29.998 01:20:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:29.998 ************************************ 00:10:29.998 START TEST nvme_single_aen 00:10:29.998 ************************************ 00:10:29.998 01:20:15 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:30.258 Asynchronous Event Request test 00:10:30.258 Attached to 0000:00:10.0 00:10:30.258 Attached to 0000:00:11.0 00:10:30.258 Attached to 0000:00:13.0 00:10:30.258 Attached to 0000:00:12.0 00:10:30.258 Reset controller to setup AER completions for this process 00:10:30.258 Registering asynchronous event callbacks... 
00:10:30.258 Getting orig temperature thresholds of all controllers 00:10:30.258 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.258 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.258 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.258 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:30.258 Setting all controllers temperature threshold low to trigger AER 00:10:30.258 Waiting for all controllers temperature threshold to be set lower 00:10:30.258 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.258 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:30.258 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.258 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:30.258 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.258 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:30.258 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:30.258 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:30.258 Waiting for all controllers to trigger AER and reset threshold 00:10:30.258 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.258 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.258 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.258 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:30.258 Cleaning up... 00:10:30.258 ************************************ 00:10:30.258 END TEST nvme_single_aen 00:10:30.258 ************************************ 00:10:30.258 00:10:30.258 real 0m0.304s 00:10:30.258 user 0m0.101s 00:10:30.258 sys 0m0.139s 00:10:30.258 01:20:15 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:30.258 01:20:15 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:30.258 01:20:15 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:30.258 01:20:15 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:30.258 01:20:15 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:30.258 01:20:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:30.258 ************************************ 00:10:30.258 START TEST nvme_doorbell_aers 00:10:30.258 ************************************ 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:10:30.258 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:30.517 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:30.517 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 
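Editor's sketch: the two traced sub-commands above are how get_nvme_bdfs builds the controller list for the doorbell test — gen_nvme.sh emits a JSON config and jq extracts each traddr. Collected into a standalone loop below; the paths and the jq filter are exactly the ones this log traces, and only the loop body and the empty-list check message are new.
    #!/usr/bin/env bash
    # Sketch: enumerate NVMe controller BDFs the same way the doorbell test does.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    for bdf in "${bdfs[@]}"; do
        echo "found NVMe controller at $bdf"    # this run found 4: 00:10.0, 00:11.0, 00:12.0, 00:13.0
    done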
00:10:30.517 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:30.517 01:20:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:30.517 01:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:30.517 01:20:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:30.775 [2024-07-21 01:20:15.979845] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:10:40.748 Executing: test_write_invalid_db 00:10:40.748 Waiting for AER completion... 00:10:40.748 Failure: test_write_invalid_db 00:10:40.748 00:10:40.748 Executing: test_invalid_db_write_overflow_sq 00:10:40.748 Waiting for AER completion... 00:10:40.748 Failure: test_invalid_db_write_overflow_sq 00:10:40.748 00:10:40.748 Executing: test_invalid_db_write_overflow_cq 00:10:40.748 Waiting for AER completion... 00:10:40.748 Failure: test_invalid_db_write_overflow_cq 00:10:40.748 00:10:40.748 01:20:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:40.748 01:20:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:40.749 [2024-07-21 01:20:26.007793] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:10:50.720 Executing: test_write_invalid_db 00:10:50.720 Waiting for AER completion... 00:10:50.720 Failure: test_write_invalid_db 00:10:50.720 00:10:50.720 Executing: test_invalid_db_write_overflow_sq 00:10:50.720 Waiting for AER completion... 00:10:50.720 Failure: test_invalid_db_write_overflow_sq 00:10:50.720 00:10:50.720 Executing: test_invalid_db_write_overflow_cq 00:10:50.720 Waiting for AER completion... 00:10:50.720 Failure: test_invalid_db_write_overflow_cq 00:10:50.720 00:10:50.720 01:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:50.720 01:20:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:50.979 [2024-07-21 01:20:36.052129] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:01.005 Executing: test_write_invalid_db 00:11:01.005 Waiting for AER completion... 00:11:01.005 Failure: test_write_invalid_db 00:11:01.005 00:11:01.005 Executing: test_invalid_db_write_overflow_sq 00:11:01.005 Waiting for AER completion... 00:11:01.005 Failure: test_invalid_db_write_overflow_sq 00:11:01.005 00:11:01.005 Executing: test_invalid_db_write_overflow_cq 00:11:01.005 Waiting for AER completion... 
00:11:01.005 Failure: test_invalid_db_write_overflow_cq 00:11:01.005 00:11:01.005 01:20:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:01.005 01:20:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:01.005 [2024-07-21 01:20:46.103184] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 Executing: test_write_invalid_db 00:11:10.977 Waiting for AER completion... 00:11:10.977 Failure: test_write_invalid_db 00:11:10.977 00:11:10.977 Executing: test_invalid_db_write_overflow_sq 00:11:10.977 Waiting for AER completion... 00:11:10.977 Failure: test_invalid_db_write_overflow_sq 00:11:10.977 00:11:10.977 Executing: test_invalid_db_write_overflow_cq 00:11:10.977 Waiting for AER completion... 00:11:10.977 Failure: test_invalid_db_write_overflow_cq 00:11:10.977 00:11:10.977 00:11:10.977 real 0m40.325s 00:11:10.977 user 0m29.771s 00:11:10.977 sys 0m10.190s 00:11:10.977 ************************************ 00:11:10.977 END TEST nvme_doorbell_aers 00:11:10.977 ************************************ 00:11:10.977 01:20:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:10.977 01:20:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:10.977 01:20:55 nvme -- nvme/nvme.sh@97 -- # uname 00:11:10.977 01:20:55 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:10.977 01:20:55 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:10.977 01:20:55 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:10.977 01:20:55 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:10.977 01:20:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:10.977 ************************************ 00:11:10.977 START TEST nvme_multi_aen 00:11:10.977 ************************************ 00:11:10.977 01:20:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:10.977 [2024-07-21 01:20:56.173084] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.173178] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.173195] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.174778] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.174811] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.174836] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.176392] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. 
Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.176558] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.176670] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.178056] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.178225] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 [2024-07-21 01:20:56.178324] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80991) is not found. Dropping the request. 00:11:10.977 Child process pid: 81512 00:11:11.237 [Child] Asynchronous Event Request test 00:11:11.237 [Child] Attached to 0000:00:10.0 00:11:11.237 [Child] Attached to 0000:00:11.0 00:11:11.237 [Child] Attached to 0000:00:13.0 00:11:11.237 [Child] Attached to 0000:00:12.0 00:11:11.237 [Child] Registering asynchronous event callbacks... 00:11:11.237 [Child] Getting orig temperature thresholds of all controllers 00:11:11.237 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:11.237 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 [Child] Cleaning up... 00:11:11.237 Asynchronous Event Request test 00:11:11.237 Attached to 0000:00:10.0 00:11:11.237 Attached to 0000:00:11.0 00:11:11.237 Attached to 0000:00:13.0 00:11:11.237 Attached to 0000:00:12.0 00:11:11.237 Reset controller to setup AER completions for this process 00:11:11.237 Registering asynchronous event callbacks... 
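The nvme_multi_aen invocation differs from the single-AEN run only by the -m flag (both flags are copied from the run_test line above); judging by the [Child] block above, that flag makes the helper fork a child which attaches to the same four controllers before the parent repeats the threshold test below:

  ./test/nvme/aer/aer -m -T -i 0   # as nvme_multi_aen runs it in this log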
00:11:11.237 Getting orig temperature thresholds of all controllers 00:11:11.237 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:11.237 Setting all controllers temperature threshold low to trigger AER 00:11:11.237 Waiting for all controllers temperature threshold to be set lower 00:11:11.237 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:11.237 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:11.237 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:11.237 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:11.237 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:11.237 Waiting for all controllers to trigger AER and reset threshold 00:11:11.237 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:11.237 Cleaning up... 00:11:11.237 00:11:11.237 real 0m0.533s 00:11:11.237 user 0m0.177s 00:11:11.237 sys 0m0.258s 00:11:11.237 01:20:56 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.237 01:20:56 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:11.237 ************************************ 00:11:11.237 END TEST nvme_multi_aen 00:11:11.237 ************************************ 00:11:11.496 01:20:56 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:11.496 01:20:56 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:11.496 01:20:56 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.496 01:20:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:11.496 ************************************ 00:11:11.496 START TEST nvme_startup 00:11:11.496 ************************************ 00:11:11.496 01:20:56 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:11.755 Initializing NVMe Controllers 00:11:11.755 Attached to 0000:00:10.0 00:11:11.755 Attached to 0000:00:11.0 00:11:11.755 Attached to 0000:00:13.0 00:11:11.755 Attached to 0000:00:12.0 00:11:11.755 Initialization complete. 00:11:11.755 Time used:180362.016 (us). 
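The Time used figure above is the whole point of the nvme_startup step: roughly 0.18 s to attach and initialize all four controllers. It comes from this invocation, with the -t 1000000 argument carried over unchanged from the run_test line earlier in the log:

  ./test/nvme/startup/startup -t 1000000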
00:11:11.755 ************************************ 00:11:11.755 END TEST nvme_startup 00:11:11.755 ************************************ 00:11:11.755 00:11:11.755 real 0m0.268s 00:11:11.755 user 0m0.090s 00:11:11.755 sys 0m0.129s 00:11:11.755 01:20:56 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.755 01:20:56 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:11.755 01:20:56 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:11.755 01:20:56 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:11.755 01:20:56 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.755 01:20:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:11.755 ************************************ 00:11:11.755 START TEST nvme_multi_secondary 00:11:11.755 ************************************ 00:11:11.755 01:20:56 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:11:11.755 01:20:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81563 00:11:11.756 01:20:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:11.756 01:20:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81564 00:11:11.756 01:20:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:11.756 01:20:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:15.056 Initializing NVMe Controllers 00:11:15.056 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:15.056 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:15.056 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:15.056 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:15.056 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:15.056 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:15.056 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:15.056 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:15.056 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:15.056 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:15.056 Initialization complete. Launching workers. 
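The latency tables that follow come from three spdk_nvme_perf instances running concurrently, all sharing -i 0 but pinned to different core masks; the commands are copied from the xtrace above, with the backgrounding shown here only for illustration:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
  wait

The "from core N" labels in the tables below correspond to these -c masks.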
00:11:15.056 ======================================================== 00:11:15.056 Latency(us) 00:11:15.056 Device Information : IOPS MiB/s Average min max 00:11:15.056 PCIE (0000:00:10.0) NSID 1 from core 1: 5105.88 19.94 3131.12 1440.59 8265.52 00:11:15.056 PCIE (0000:00:11.0) NSID 1 from core 1: 5105.88 19.94 3133.20 1451.26 7115.79 00:11:15.056 PCIE (0000:00:13.0) NSID 1 from core 1: 5105.88 19.94 3133.23 1447.54 7655.97 00:11:15.056 PCIE (0000:00:12.0) NSID 1 from core 1: 5105.88 19.94 3133.20 1414.87 7440.15 00:11:15.056 PCIE (0000:00:12.0) NSID 2 from core 1: 5105.88 19.94 3133.19 1397.99 8078.55 00:11:15.056 PCIE (0000:00:12.0) NSID 3 from core 1: 5105.88 19.94 3133.63 1533.26 8485.46 00:11:15.056 ======================================================== 00:11:15.056 Total : 30635.29 119.67 3132.93 1397.99 8485.46 00:11:15.056 00:11:15.314 Initializing NVMe Controllers 00:11:15.314 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:15.314 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:15.314 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:15.314 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:15.314 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:15.314 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:15.314 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:15.314 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:15.314 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:15.314 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:15.314 Initialization complete. Launching workers. 00:11:15.314 ======================================================== 00:11:15.314 Latency(us) 00:11:15.314 Device Information : IOPS MiB/s Average min max 00:11:15.314 PCIE (0000:00:10.0) NSID 1 from core 2: 3055.13 11.93 5233.94 1440.68 12133.05 00:11:15.314 PCIE (0000:00:11.0) NSID 1 from core 2: 3055.13 11.93 5236.62 1495.02 14008.06 00:11:15.314 PCIE (0000:00:13.0) NSID 1 from core 2: 3055.13 11.93 5236.53 1382.11 12980.06 00:11:15.314 PCIE (0000:00:12.0) NSID 1 from core 2: 3055.13 11.93 5236.34 1333.41 16757.86 00:11:15.314 PCIE (0000:00:12.0) NSID 2 from core 2: 3055.13 11.93 5236.31 1050.38 16897.11 00:11:15.314 PCIE (0000:00:12.0) NSID 3 from core 2: 3055.13 11.93 5235.78 873.99 13580.93 00:11:15.314 ======================================================== 00:11:15.314 Total : 18330.79 71.60 5235.92 873.99 16897.11 00:11:15.314 00:11:15.314 01:21:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81563 00:11:17.213 Initializing NVMe Controllers 00:11:17.213 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:17.213 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:17.213 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:17.213 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:17.213 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:17.213 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:17.213 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:17.213 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:17.213 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:17.213 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:17.213 Initialization complete. Launching workers. 
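As a quick consistency check on the core 1 table above, the columns line up with the flags used. Taking the PCIE (0000:00:10.0) NSID 1 row:

  19.94 MiB/s ≈ 5105.88 IO/s × 4096 B ÷ 1,048,576 B/MiB
  16 ≈ 3131.12 µs × 5105.88 IO/s ÷ 1,000,000 µs/s

so throughput is simply IOPS times the 4 KiB transfer size, and average latency times IOPS recovers the queue depth set with -q 16 (Little's law).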
00:11:17.213 ======================================================== 00:11:17.213 Latency(us) 00:11:17.213 Device Information : IOPS MiB/s Average min max 00:11:17.213 PCIE (0000:00:10.0) NSID 1 from core 0: 8228.57 32.14 1943.06 980.64 9515.93 00:11:17.213 PCIE (0000:00:11.0) NSID 1 from core 0: 8228.57 32.14 1944.03 1011.30 9130.08 00:11:17.213 PCIE (0000:00:13.0) NSID 1 from core 0: 8228.57 32.14 1943.99 909.84 8083.78 00:11:17.213 PCIE (0000:00:12.0) NSID 1 from core 0: 8228.57 32.14 1943.96 765.69 7918.72 00:11:17.213 PCIE (0000:00:12.0) NSID 2 from core 0: 8228.57 32.14 1943.93 656.13 8603.76 00:11:17.213 PCIE (0000:00:12.0) NSID 3 from core 0: 8228.57 32.14 1943.91 517.31 9503.30 00:11:17.213 ======================================================== 00:11:17.213 Total : 49371.45 192.86 1943.81 517.31 9515.93 00:11:17.213 00:11:17.213 01:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81564 00:11:17.213 01:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:17.213 01:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81633 00:11:17.213 01:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81634 00:11:17.213 01:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:17.213 01:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:20.498 Initializing NVMe Controllers 00:11:20.498 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:20.498 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:20.498 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:20.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:20.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:20.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:20.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:20.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:20.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:20.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:20.498 Initialization complete. Launching workers. 
00:11:20.498 ======================================================== 00:11:20.498 Latency(us) 00:11:20.498 Device Information : IOPS MiB/s Average min max 00:11:20.498 PCIE (0000:00:10.0) NSID 1 from core 0: 5638.10 22.02 2835.81 996.93 5944.69 00:11:20.498 PCIE (0000:00:11.0) NSID 1 from core 0: 5638.10 22.02 2837.38 1033.45 5612.57 00:11:20.498 PCIE (0000:00:13.0) NSID 1 from core 0: 5638.10 22.02 2837.42 1039.57 5849.15 00:11:20.498 PCIE (0000:00:12.0) NSID 1 from core 0: 5638.10 22.02 2837.41 1035.30 5450.30 00:11:20.498 PCIE (0000:00:12.0) NSID 2 from core 0: 5638.10 22.02 2837.74 1018.19 5565.50 00:11:20.498 PCIE (0000:00:12.0) NSID 3 from core 0: 5638.10 22.02 2837.79 1033.61 5487.62 00:11:20.498 ======================================================== 00:11:20.498 Total : 33828.60 132.14 2837.26 996.93 5944.69 00:11:20.498 00:11:20.498 Initializing NVMe Controllers 00:11:20.498 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:20.498 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:20.498 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:20.498 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:20.498 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:20.498 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:20.498 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:20.498 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:20.498 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:20.498 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:20.498 Initialization complete. Launching workers. 00:11:20.498 ======================================================== 00:11:20.498 Latency(us) 00:11:20.499 Device Information : IOPS MiB/s Average min max 00:11:20.499 PCIE (0000:00:10.0) NSID 1 from core 1: 5518.29 21.56 2897.05 986.31 6791.11 00:11:20.499 PCIE (0000:00:11.0) NSID 1 from core 1: 5518.29 21.56 2898.72 1001.28 6832.73 00:11:20.499 PCIE (0000:00:13.0) NSID 1 from core 1: 5518.29 21.56 2898.61 830.84 6804.34 00:11:20.499 PCIE (0000:00:12.0) NSID 1 from core 1: 5518.29 21.56 2898.51 692.26 6268.74 00:11:20.499 PCIE (0000:00:12.0) NSID 2 from core 1: 5518.29 21.56 2898.41 585.08 5996.65 00:11:20.499 PCIE (0000:00:12.0) NSID 3 from core 1: 5518.29 21.56 2898.32 480.03 6987.72 00:11:20.499 ======================================================== 00:11:20.499 Total : 33109.71 129.33 2898.27 480.03 6987.72 00:11:20.499 00:11:22.415 Initializing NVMe Controllers 00:11:22.415 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:22.415 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:22.415 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:22.415 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:22.415 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:22.415 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:22.415 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:22.415 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:22.415 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:22.415 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:22.415 Initialization complete. Launching workers. 
00:11:22.415 ======================================================== 00:11:22.415 Latency(us) 00:11:22.415 Device Information : IOPS MiB/s Average min max 00:11:22.415 PCIE (0000:00:10.0) NSID 1 from core 2: 3157.78 12.34 5065.41 1240.90 11907.86 00:11:22.415 PCIE (0000:00:11.0) NSID 1 from core 2: 3157.78 12.34 5066.67 1257.30 11692.67 00:11:22.415 PCIE (0000:00:13.0) NSID 1 from core 2: 3157.78 12.34 5066.49 1276.40 12299.27 00:11:22.415 PCIE (0000:00:12.0) NSID 1 from core 2: 3157.78 12.34 5066.64 1246.23 11965.86 00:11:22.415 PCIE (0000:00:12.0) NSID 2 from core 2: 3157.78 12.34 5066.53 1207.02 12006.44 00:11:22.415 PCIE (0000:00:12.0) NSID 3 from core 2: 3157.78 12.34 5066.17 771.28 12200.95 00:11:22.415 ======================================================== 00:11:22.415 Total : 18946.66 74.01 5066.32 771.28 12299.27 00:11:22.415 00:11:22.415 ************************************ 00:11:22.415 END TEST nvme_multi_secondary 00:11:22.415 ************************************ 00:11:22.415 01:21:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81633 00:11:22.415 01:21:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81634 00:11:22.415 00:11:22.415 real 0m10.713s 00:11:22.415 user 0m18.401s 00:11:22.415 sys 0m0.965s 00:11:22.415 01:21:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:22.415 01:21:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:22.415 01:21:07 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:22.415 01:21:07 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:22.415 01:21:07 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/80583 ]] 00:11:22.415 01:21:07 nvme -- common/autotest_common.sh@1086 -- # kill 80583 00:11:22.415 01:21:07 nvme -- common/autotest_common.sh@1087 -- # wait 80583 00:11:22.415 [2024-07-21 01:21:07.699295] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.699429] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.699480] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.699530] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.700860] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.700947] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.700992] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.701041] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.702258] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 
00:11:22.415 [2024-07-21 01:21:07.702357] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.702404] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.702460] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.703577] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.703671] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.703716] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.415 [2024-07-21 01:21:07.703766] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81510) is not found. Dropping the request. 00:11:22.674 [2024-07-21 01:21:07.824990] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:22.674 01:21:07 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:11:22.674 01:21:07 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:11:22.674 01:21:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:22.674 01:21:07 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:22.674 01:21:07 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:22.674 01:21:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:22.674 ************************************ 00:11:22.674 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:22.674 ************************************ 00:11:22.674 01:21:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:22.934 * Looking for test storage... 
00:11:22.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81794 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81794 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 81794 ']' 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:22.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:22.934 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:22.934 [2024-07-21 01:21:08.225692] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:22.934 [2024-07-21 01:21:08.225813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81794 ] 00:11:23.193 [2024-07-21 01:21:08.412868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.193 [2024-07-21 01:21:08.478671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.193 [2024-07-21 01:21:08.478964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.193 [2024-07-21 01:21:08.479044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.193 [2024-07-21 01:21:08.479098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.758 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:23.758 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:11:23.758 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:23.758 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.758 01:21:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:23.758 nvme0n1 00:11:23.758 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.758 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:23.758 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Yt7kt.txt 00:11:23.758 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:23.758 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.758 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:24.016 true 00:11:24.016 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.016 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:24.016 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721524869 00:11:24.016 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81816 00:11:24.016 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:24.016 
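Pulled together from the xtrace above and below, the stuck-admin-command scenario being exercised here can be reproduced against an already-running target (build/bin/spdk_tgt -m 0xF, as started a few lines earlier) roughly as follows; every RPC name, flag and payload is copied from this log, and only the backgrounding syntax is added for illustration:

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Arm a one-shot injection: hold the next admin opcode 10 (GET FEATURES)
  # for up to 15 s and complete it with SCT 0 / SC 1 instead of submitting it.
  scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # Send the GET FEATURES (NUMBER OF QUEUES) command that will get stuck ...
  scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
  # ... then reset the controller while that command is pending; the reset is
  # expected to complete it manually, which the output below confirms.
  scripts/rpc.py bdev_nvme_reset_controller nvme0
  wait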
01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:24.016 01:21:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:25.961 [2024-07-21 01:21:11.101801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:25.961 [2024-07-21 01:21:11.102166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:25.961 [2024-07-21 01:21:11.102198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:25.961 [2024-07-21 01:21:11.102216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.961 [2024-07-21 01:21:11.104301] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:25.961 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81816 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81816 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81816 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Yt7kt.txt 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:25.961 01:21:11 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Yt7kt.txt 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81794 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 81794 ']' 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 81794 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:11:25.961 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:25.962 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81794 00:11:25.962 killing process with pid 81794 00:11:25.962 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:25.962 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:25.962 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81794' 00:11:25.962 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 81794 00:11:25.962 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 81794 00:11:26.896 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:26.896 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:26.896 00:11:26.896 real 
0m3.989s 00:11:26.896 user 0m13.363s 00:11:26.896 sys 0m0.863s 00:11:26.896 ************************************ 00:11:26.896 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:26.896 ************************************ 00:11:26.896 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.896 01:21:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:26.896 01:21:11 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:26.896 01:21:11 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:26.896 01:21:11 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:26.896 01:21:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.896 01:21:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:26.896 ************************************ 00:11:26.896 START TEST nvme_fio 00:11:26.896 ************************************ 00:11:26.896 01:21:11 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:11:26.896 01:21:11 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:26.896 01:21:11 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:26.896 01:21:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:26.896 01:21:11 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:26.896 01:21:11 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:11:26.896 01:21:11 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:26.896 01:21:11 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:26.896 01:21:11 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:26.896 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:26.896 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:26.896 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:26.896 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:26.896 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:26.896 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:26.896 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:27.160 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:27.160 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:27.419 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:27.419 01:21:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:27.419 01:21:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:27.677 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:27.677 fio-3.35 00:11:27.677 Starting 1 thread 00:11:31.863 00:11:31.863 test: (groupid=0, jobs=1): err= 0: pid=81950: Sun Jul 21 01:21:16 2024 00:11:31.863 read: IOPS=22.5k, BW=87.9MiB/s (92.1MB/s)(176MiB/2001msec) 00:11:31.863 slat (nsec): min=3782, max=83806, avg=4713.31, stdev=1403.16 00:11:31.863 clat (usec): min=222, max=13141, avg=2839.94, stdev=465.96 00:11:31.863 lat (usec): min=227, max=13222, avg=2844.66, stdev=466.61 00:11:31.863 clat percentiles (usec): 00:11:31.863 | 1.00th=[ 2343], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2671], 00:11:31.863 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:31.863 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3097], 00:11:31.863 | 99.00th=[ 4817], 99.50th=[ 5932], 99.90th=[ 8356], 99.95th=[10421], 00:11:31.863 | 99.99th=[12911] 00:11:31.863 bw ( KiB/s): min=88854, max=91096, per=99.61%, avg=89631.33, stdev=1269.24, samples=3 00:11:31.863 iops : min=22213, max=22774, avg=22407.67, stdev=317.46, samples=3 00:11:31.863 write: IOPS=22.4k, BW=87.4MiB/s (91.6MB/s)(175MiB/2001msec); 0 zone resets 00:11:31.863 slat (nsec): min=3869, max=41391, avg=4995.36, stdev=1205.62 00:11:31.863 clat (usec): min=174, max=13005, avg=2847.72, stdev=469.75 00:11:31.863 lat (usec): min=179, max=13019, avg=2852.72, stdev=470.36 00:11:31.863 clat percentiles (usec): 00:11:31.863 | 1.00th=[ 2343], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2671], 00:11:31.863 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:31.863 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3097], 00:11:31.863 | 99.00th=[ 4817], 99.50th=[ 5997], 99.90th=[ 8717], 99.95th=[10814], 00:11:31.863 | 99.99th=[12649] 00:11:31.863 bw ( KiB/s): min=88504, max=90624, per=100.00%, avg=89753.00, stdev=1109.40, samples=3 00:11:31.863 iops : min=22126, max=22656, avg=22438.00, stdev=277.22, samples=3 00:11:31.863 lat (usec) : 
250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:31.863 lat (msec) : 2=0.32%, 4=98.29%, 10=1.29%, 20=0.06% 00:11:31.863 cpu : usr=99.30%, sys=0.05%, ctx=8, majf=0, minf=625 00:11:31.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:31.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.863 issued rwts: total=45013,44748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.863 00:11:31.863 Run status group 0 (all jobs): 00:11:31.863 READ: bw=87.9MiB/s (92.1MB/s), 87.9MiB/s-87.9MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:11:31.863 WRITE: bw=87.4MiB/s (91.6MB/s), 87.4MiB/s-87.4MiB/s (91.6MB/s-91.6MB/s), io=175MiB (183MB), run=2001-2001msec 00:11:31.863 ----------------------------------------------------- 00:11:31.863 Suppressions used: 00:11:31.863 count bytes template 00:11:31.863 1 32 /usr/src/fio/parse.c 00:11:31.863 1 8 libtcmalloc_minimal.so 00:11:31.863 ----------------------------------------------------- 00:11:31.863 00:11:31.863 01:21:16 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:31.863 01:21:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:31.863 01:21:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:31.863 01:21:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:31.863 01:21:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:31.863 01:21:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:31.863 01:21:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:31.863 01:21:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:31.863 01:21:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:32.123 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:32.123 fio-3.35 00:11:32.123 Starting 1 thread 00:11:36.313 00:11:36.313 test: (groupid=0, jobs=1): err= 0: pid=82015: Sun Jul 21 01:21:20 2024 00:11:36.313 read: IOPS=23.6k, BW=92.1MiB/s (96.6MB/s)(184MiB/2001msec) 00:11:36.313 slat (nsec): min=3794, max=63344, avg=4280.95, stdev=1064.47 00:11:36.313 clat (usec): min=210, max=15045, avg=2710.96, stdev=374.56 00:11:36.313 lat (usec): min=214, max=15102, avg=2715.24, stdev=375.02 00:11:36.313 clat percentiles (usec): 00:11:36.313 | 1.00th=[ 2442], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:11:36.313 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:11:36.313 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2900], 00:11:36.313 | 99.00th=[ 3195], 99.50th=[ 3818], 99.90th=[ 9110], 99.95th=[12256], 00:11:36.313 | 99.99th=[14746] 00:11:36.313 bw ( KiB/s): min=91824, max=95160, per=99.48%, avg=93816.00, stdev=1759.87, samples=3 00:11:36.313 iops : min=22958, max=23790, avg=23454.67, stdev=438.84, samples=3 00:11:36.313 write: IOPS=23.4k, BW=91.5MiB/s (95.9MB/s)(183MiB/2001msec); 0 zone resets 00:11:36.313 slat (nsec): min=3919, max=44779, avg=4501.77, stdev=1017.42 00:11:36.313 clat (usec): min=185, max=14807, avg=2718.60, stdev=390.41 00:11:36.313 lat (usec): min=189, max=14821, avg=2723.11, stdev=390.84 00:11:36.313 clat percentiles (usec): 00:11:36.313 | 1.00th=[ 2442], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2606], 00:11:36.313 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:11:36.313 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2900], 00:11:36.313 | 99.00th=[ 3228], 99.50th=[ 3851], 99.90th=[10290], 99.95th=[12518], 00:11:36.313 | 99.99th=[14484] 00:11:36.313 bw ( KiB/s): min=91568, max=96096, per=100.00%, avg=93893.33, stdev=2266.49, samples=3 00:11:36.313 iops : min=22892, max=24024, avg=23473.33, stdev=566.62, samples=3 00:11:36.313 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:36.313 lat (msec) : 2=0.05%, 4=99.48%, 10=0.33%, 20=0.10% 00:11:36.313 cpu : usr=99.25%, sys=0.15%, ctx=13, majf=0, minf=625 00:11:36.313 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:36.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.313 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.313 issued rwts: total=47179,46856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.313 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.313 00:11:36.313 Run status group 0 (all jobs): 00:11:36.313 READ: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=184MiB (193MB), run=2001-2001msec 00:11:36.313 WRITE: bw=91.5MiB/s (95.9MB/s), 91.5MiB/s-91.5MiB/s (95.9MB/s-95.9MB/s), io=183MiB (192MB), run=2001-2001msec 00:11:36.313 ----------------------------------------------------- 00:11:36.313 Suppressions used: 00:11:36.313 count bytes template 
00:11:36.313 1 32 /usr/src/fio/parse.c 00:11:36.313 1 8 libtcmalloc_minimal.so 00:11:36.313 ----------------------------------------------------- 00:11:36.313 00:11:36.313 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:36.313 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:36.313 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:36.313 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:36.313 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:36.313 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:36.572 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:36.572 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:36.573 01:21:21 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:36.832 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:36.832 fio-3.35 00:11:36.832 Starting 1 thread 00:11:41.015 00:11:41.015 test: (groupid=0, jobs=1): err= 0: pid=82076: Sun Jul 21 01:21:25 2024 00:11:41.015 read: IOPS=23.0k, BW=90.0MiB/s (94.3MB/s)(180MiB/2001msec) 00:11:41.015 slat (nsec): min=3755, max=68507, avg=4565.95, stdev=1272.90 00:11:41.015 clat (usec): min=226, max=14039, avg=2769.76, stdev=552.73 00:11:41.015 lat (usec): min=231, max=14108, 
avg=2774.33, stdev=553.55 00:11:41.015 clat percentiles (usec): 00:11:41.015 | 1.00th=[ 2311], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2573], 00:11:41.015 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2769], 00:11:41.015 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2999], 00:11:41.015 | 99.00th=[ 5276], 99.50th=[ 7046], 99.90th=[ 8586], 99.95th=[11207], 00:11:41.015 | 99.99th=[13698] 00:11:41.015 bw ( KiB/s): min=87440, max=98680, per=100.00%, avg=93080.00, stdev=5620.11, samples=3 00:11:41.015 iops : min=21860, max=24670, avg=23270.00, stdev=1405.03, samples=3 00:11:41.015 write: IOPS=22.9k, BW=89.4MiB/s (93.8MB/s)(179MiB/2001msec); 0 zone resets 00:11:41.015 slat (nsec): min=3856, max=69803, avg=4853.18, stdev=1402.75 00:11:41.015 clat (usec): min=268, max=13783, avg=2785.16, stdev=575.18 00:11:41.015 lat (usec): min=273, max=13797, avg=2790.01, stdev=576.03 00:11:41.015 clat percentiles (usec): 00:11:41.015 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:11:41.015 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2769], 00:11:41.015 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 3032], 00:11:41.015 | 99.00th=[ 5342], 99.50th=[ 7242], 99.90th=[ 8717], 99.95th=[11600], 00:11:41.015 | 99.99th=[13435] 00:11:41.015 bw ( KiB/s): min=88728, max=98064, per=100.00%, avg=93202.67, stdev=4680.00, samples=3 00:11:41.015 iops : min=22182, max=24516, avg=23300.67, stdev=1170.00, samples=3 00:11:41.015 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:41.015 lat (msec) : 2=0.05%, 4=98.13%, 10=1.71%, 20=0.07% 00:11:41.016 cpu : usr=99.45%, sys=0.00%, ctx=2, majf=0, minf=627 00:11:41.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:41.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.016 issued rwts: total=46088,45819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.016 00:11:41.016 Run status group 0 (all jobs): 00:11:41.016 READ: bw=90.0MiB/s (94.3MB/s), 90.0MiB/s-90.0MiB/s (94.3MB/s-94.3MB/s), io=180MiB (189MB), run=2001-2001msec 00:11:41.016 WRITE: bw=89.4MiB/s (93.8MB/s), 89.4MiB/s-89.4MiB/s (93.8MB/s-93.8MB/s), io=179MiB (188MB), run=2001-2001msec 00:11:41.016 ----------------------------------------------------- 00:11:41.016 Suppressions used: 00:11:41.016 count bytes template 00:11:41.016 1 32 /usr/src/fio/parse.c 00:11:41.016 1 8 libtcmalloc_minimal.so 00:11:41.016 ----------------------------------------------------- 00:11:41.016 00:11:41.016 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:41.016 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:41.016 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:41.016 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:41.273 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:41.273 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:41.531 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:41.531 01:21:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:41.531 01:21:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:41.531 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:41.531 fio-3.35 00:11:41.531 Starting 1 thread 00:11:45.714 00:11:45.714 test: (groupid=0, jobs=1): err= 0: pid=82137: Sun Jul 21 01:21:30 2024 00:11:45.714 read: IOPS=21.9k, BW=85.5MiB/s (89.7MB/s)(171MiB/2001msec) 00:11:45.714 slat (usec): min=3, max=565, avg= 4.51, stdev= 3.14 00:11:45.714 clat (usec): min=211, max=12072, avg=2920.36, stdev=438.10 00:11:45.714 lat (usec): min=216, max=12129, avg=2924.87, stdev=438.81 00:11:45.714 clat percentiles (usec): 00:11:45.714 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:11:45.714 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:11:45.715 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 2999], 95.00th=[ 3130], 00:11:45.715 | 99.00th=[ 4555], 99.50th=[ 5997], 99.90th=[ 8455], 99.95th=[ 9372], 00:11:45.715 | 99.99th=[11731] 00:11:45.715 bw ( KiB/s): min=84688, max=89160, per=99.92%, avg=87504.00, stdev=2451.30, samples=3 00:11:45.715 iops : min=21172, max=22290, avg=21876.00, stdev=612.83, samples=3 00:11:45.715 write: IOPS=21.7k, BW=84.9MiB/s (89.1MB/s)(170MiB/2001msec); 0 zone resets 00:11:45.715 slat (nsec): min=3925, max=36058, avg=4718.42, stdev=1235.77 00:11:45.715 clat (usec): min=219, max=11901, avg=2927.38, stdev=426.02 00:11:45.715 lat (usec): min=224, max=11914, avg=2932.10, stdev=426.64 00:11:45.715 clat percentiles (usec): 00:11:45.715 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 
2802], 00:11:45.715 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:11:45.715 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3130], 00:11:45.715 | 99.00th=[ 4555], 99.50th=[ 5866], 99.90th=[ 8455], 99.95th=[ 9634], 00:11:45.715 | 99.99th=[11469] 00:11:45.715 bw ( KiB/s): min=84552, max=89984, per=100.00%, avg=87642.67, stdev=2792.45, samples=3 00:11:45.715 iops : min=21138, max=22496, avg=21910.67, stdev=698.11, samples=3 00:11:45.715 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:45.715 lat (msec) : 2=0.08%, 4=98.38%, 10=1.45%, 20=0.04% 00:11:45.715 cpu : usr=99.00%, sys=0.15%, ctx=15, majf=0, minf=623 00:11:45.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:45.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.715 issued rwts: total=43808,43507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.715 00:11:45.715 Run status group 0 (all jobs): 00:11:45.715 READ: bw=85.5MiB/s (89.7MB/s), 85.5MiB/s-85.5MiB/s (89.7MB/s-89.7MB/s), io=171MiB (179MB), run=2001-2001msec 00:11:45.715 WRITE: bw=84.9MiB/s (89.1MB/s), 84.9MiB/s-84.9MiB/s (89.1MB/s-89.1MB/s), io=170MiB (178MB), run=2001-2001msec 00:11:45.715 ----------------------------------------------------- 00:11:45.715 Suppressions used: 00:11:45.715 count bytes template 00:11:45.715 1 32 /usr/src/fio/parse.c 00:11:45.715 1 8 libtcmalloc_minimal.so 00:11:45.715 ----------------------------------------------------- 00:11:45.715 00:11:45.715 01:21:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:45.715 01:21:30 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:45.715 00:11:45.715 real 0m18.755s 00:11:45.715 user 0m14.689s 00:11:45.715 sys 0m3.658s 00:11:45.715 01:21:30 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:45.715 ************************************ 00:11:45.715 END TEST nvme_fio 00:11:45.715 01:21:30 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 ************************************ 00:11:45.715 00:11:45.715 real 1m30.563s 00:11:45.715 user 3m32.910s 00:11:45.715 sys 0m21.450s 00:11:45.715 01:21:30 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:45.715 01:21:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 ************************************ 00:11:45.715 END TEST nvme 00:11:45.715 ************************************ 00:11:45.715 01:21:30 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:45.715 01:21:30 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:45.715 01:21:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:45.715 01:21:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:45.715 01:21:30 -- common/autotest_common.sh@10 -- # set +x 00:11:45.715 ************************************ 00:11:45.715 START TEST nvme_scc 00:11:45.715 ************************************ 00:11:45.715 01:21:30 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:45.715 * Looking for test storage... 
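The three fio_nvme runs above (one per PCIe controller at 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0) all follow the same pattern visible in the xtrace: ldd the SPDK fio plugin to find the sanitizer runtime it was built against, preload that library together with the plugin, and point fio at the controller's traddr. A rough, simplified sketch of that pattern (not the verbatim autotest_common.sh helper; paths are the ones printed in the log):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  # Find the ASan runtime the plugin links against (empty when built without sanitizers).
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # Preload the sanitizer first, then the plugin, and run fio against the PCIe
  # controller address (colons in the BDF become dots in fio's filename syntax).
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$config" \
      '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096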
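The nvme_scc test starting here first sources test/common/nvme/functions.sh, whose scan_nvme_ctrls walks /sys/class/nvme/nvme*, runs nvme-cli's id-ctrl and id-ns against each device, and caches every reported field in a bash associative array per controller and namespace; that is what produces the long register dump in the xtrace that follows. A minimal sketch of the idea (an assumed simplification, not the exact helper):

  # Parse "field : value" lines from id-ctrl into an associative array.
  declare -A nvme0
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue            # skip lines without a value
      reg=${reg//[[:space:]]/}             # strip whitespace from the key
      val=$(echo "$val" | xargs)           # trim the value
      nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[mdts]}"                    # 7 for the QEMU controller scanned below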
00:11:45.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:45.715 01:21:30 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.715 01:21:30 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.715 01:21:30 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.715 01:21:30 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.715 01:21:30 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.715 01:21:30 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.715 01:21:30 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.715 01:21:30 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:45.715 01:21:30 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:45.715 01:21:30 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:45.715 01:21:30 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:45.715 01:21:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:45.715 01:21:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:45.715 01:21:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:45.715 01:21:30 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:46.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.558 Waiting for block devices as requested 00:11:46.816 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:46.816 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:46.816 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.073 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.349 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:52.349 01:21:37 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:52.349 01:21:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:52.349 01:21:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:52.349 01:21:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:52.349 01:21:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:52.349 
01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.349 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:52.350 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:52.350 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:52.350 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:52.351 01:21:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.352 01:21:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:52.352 01:21:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:52.352 01:21:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.365 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.366 01:21:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:52.366 01:21:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:52.366 01:21:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:52.366 01:21:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:52.367 01:21:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:52.367 01:21:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:52.367 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.367 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 
01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:52.368 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:52.368 01:21:37 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:52.368 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:52.369 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:52.369 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 
01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:52.370 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:52.371 01:21:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:52.371 01:21:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:52.371 01:21:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:52.371 01:21:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.371 01:21:37 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.371 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.372 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:52.373 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 
01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:52.373 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:52.374 01:21:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.374 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.375 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:52.376 01:21:37 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.376 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:52.377 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:52.378 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.640 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
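The run of functions.sh@21-23 entries above is the nvme_get helper consuming "nvme id-ns /dev/nvme2n3" output one "field : value" pair at a time: IFS is set to ":", "read -r reg val" splits each line, empty values are skipped by the [[ -n ... ]] guard, and each surviving pair is eval'ed into a global associative array named after the device, so later test steps can look up nvme2n3[nsze], nvme2n3[flbas], nvme2n3[lbaf4] and so on directly. A minimal sketch of that parsing pattern, assuming only what the trace shows (the sketch's function name and the whitespace trimming are illustrative, not taken from this log):

    # Sketch of the nvme_get pattern traced at nvme/functions.sh@17-23 above.
    # nvme_get_sketch and the trimming details are assumptions for illustration.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                      # e.g. a global nvme2n3=() array
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # field name: nsze, flbas, lbaf4, ...
            val=${val#"${val%%[![:space:]]*}"}   # trim leading spaces from the value
            [[ -n $val ]] || continue            # the trace skips empty values
            eval "${ref}[\$reg]=\$val"           # mirrors the eval lines in the trace
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

    # Usage matching the trace: nvme_get_sketch nvme2n3 id-ns /dev/nvme2n3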
00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:52.641 01:21:37 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:52.641 01:21:37 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:52.641 01:21:37 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:52.641 01:21:37 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:52.641 01:21:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.641 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:52.642 01:21:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:52.642 01:21:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.642 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:52.643 01:21:37 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.643 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:52.644 
01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
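A short aside on what the trace above is doing: functions.sh walks every "reg : val" line that nvme id-ctrl prints for the controller at 0000:00:13.0, splits it on the colon with IFS=: read -r reg val, and stores the value in the nvme3 associative array via eval, which is why each register appears three times in the trace (the [[ -n ... ]] guard, the eval, and the resulting assignment). A minimal stand-alone sketch of that loop -- not the real test/common/nvme/functions.sh, and assuming a plain nvme binary on PATH plus an existing /dev/nvme3 -- could look like:

#!/usr/bin/env bash
# Sketch only: load the "reg : value" lines printed by nvme id-ctrl into a
# bash associative array, mirroring the nvme_get trace above (the real
# helper uses eval and a pinned nvme-cli build; this version avoids eval).
declare -A nvme3

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue              # skip banner/blank lines without a value
    reg=${reg//[[:space:]]/}               # keys like "sqes   " lose their padding
    val=${val#"${val%%[![:space:]]*}"}     # trim leading spaces, keep the value verbatim
    nvme3[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)

echo "oncs=${nvme3[oncs]} subnqn=${nvme3[subnqn]}"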
00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:52.644 01:21:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:52.644 01:21:37 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:52.644 01:21:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.645 01:21:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.645 01:21:37 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:52.645 01:21:37 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:52.645 01:21:37 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:52.645 01:21:37 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:52.645 01:21:37 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:52.645 01:21:37 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:52.645 01:21:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:52.645 01:21:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:52.645 01:21:37 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:53.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.780 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.038 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.038 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.038 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.038 01:21:39 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:54.038 01:21:39 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:54.038 01:21:39 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.038 01:21:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:54.297 ************************************ 00:11:54.297 START TEST nvme_simple_copy 00:11:54.297 ************************************ 00:11:54.297 01:21:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:54.555 Initializing NVMe Controllers 00:11:54.555 Attaching to 0000:00:10.0 00:11:54.555 Controller supports SCC. Attached to 0000:00:10.0 00:11:54.555 Namespace ID: 1 size: 6GB 00:11:54.555 Initialization complete. 00:11:54.555 00:11:54.555 Controller QEMU NVMe Ctrl (12340 ) 00:11:54.555 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:54.555 Namespace Block Size:4096 00:11:54.555 Writing LBAs 0 to 63 with Random Data 00:11:54.555 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:54.555 LBAs matching Written Data: 64 00:11:54.555 00:11:54.556 real 0m0.291s 00:11:54.556 user 0m0.098s 00:11:54.556 sys 0m0.092s 00:11:54.556 01:21:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.556 01:21:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:54.556 ************************************ 00:11:54.556 END TEST nvme_simple_copy 00:11:54.556 ************************************ 00:11:54.556 ************************************ 00:11:54.556 END TEST nvme_scc 00:11:54.556 ************************************ 00:11:54.556 00:11:54.556 real 0m8.878s 00:11:54.556 user 0m1.467s 00:11:54.556 sys 0m2.443s 00:11:54.556 01:21:39 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.556 01:21:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:54.556 01:21:39 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:54.556 01:21:39 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:11:54.556 01:21:39 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:54.556 01:21:39 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:11:54.556 01:21:39 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:54.556 01:21:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:54.556 01:21:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.556 01:21:39 -- common/autotest_common.sh@10 -- # set +x 00:11:54.556 ************************************ 00:11:54.556 START TEST nvme_fdp 00:11:54.556 ************************************ 00:11:54.556 01:21:39 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh 00:11:54.815 * Looking for test storage... 
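The get_ctrl_with_feature scc pass traced above checks the ONCS word of every parsed controller; bit 8 of ONCS advertises the Copy command, and since all four QEMU controllers report oncs=0x15d every one qualifies, so the test settles on nvme1 at 0000:00:10.0 and the simple_copy binary writes LBAs 0-63 with random data, copies them to LBA 256, and verifies all 64 LBAs match. A condensed sketch of that capability check (oncs values hard-coded for illustration; the real run pulls them out of the arrays built above):

#!/usr/bin/env bash
# Sketch of the SCC capability test used above: ONCS bit 8 is the Copy
# command support bit, so (( oncs & 1 << 8 )) succeeds for oncs=0x15d.
declare -A ctrl_oncs=( [nvme0]=0x15d [nvme1]=0x15d [nvme2]=0x15d [nvme3]=0x15d )

ctrl_has_scc() {
    local oncs=${ctrl_oncs[$1]}
    (( oncs & 1 << 8 ))        # exit status 0 only when the Copy bit is set
}

for ctrl in "${!ctrl_oncs[@]}"; do
    ctrl_has_scc "$ctrl" && echo "$ctrl supports simple copy"
done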
00:11:54.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:54.815 01:21:39 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:54.815 01:21:39 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:54.815 01:21:39 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:54.815 01:21:39 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:54.815 01:21:39 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.815 01:21:39 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.815 01:21:39 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.816 01:21:39 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.816 01:21:39 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.816 01:21:39 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.816 01:21:39 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.816 01:21:39 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:54.816 01:21:39 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:54.816 01:21:39 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:54.816 01:21:39 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.816 01:21:39 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:55.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:55.642 Waiting for block devices as requested 00:11:55.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.899 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:55.899 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.177 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:01.177 01:21:46 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:01.177 01:21:46 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:01.177 01:21:46 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:01.177 01:21:46 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:01.177 01:21:46 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 
01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.177 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:01.178 01:21:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:01.178 01:21:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.178 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:01.179 01:21:46 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:01.179 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:01.180 
01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:01.180 
01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.180 01:21:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.180 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:01.181 01:21:46 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:01.181 01:21:46 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:01.181 01:21:46 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:01.181 01:21:46 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:01.181 01:21:46 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:01.181 01:21:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:01.181 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 
01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:01.182 01:21:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:01.182 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:01.183 01:21:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.183 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:01.184 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
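[editor note for readers skimming this trace] The repeated functions.sh@21-23 triples above are the nvme_get helper splitting each "field : value" line of `nvme id-ctrl` / `nvme id-ns` output into a global associative array (nvme1, nvme1n1, ...). Below is a minimal sketch of that loop, reconstructed from the xtrace alone and not the verbatim SPDK functions.sh; NVME_BIN is a hypothetical stand-in for the /usr/local/src/nvme-cli/nvme path used in this run, and the whitespace trimming is approximate.

    # Sketch reconstructed from the trace; assumes bash >= 4.2 for `local -gA`.
    NVME_BIN=${NVME_BIN:-nvme}      # stand-in for /usr/local/src/nvme-cli/nvme

    nvme_get() {
        local ref=$1 reg val        # functions.sh@17 in the trace
        shift
        local -gA "$ref=()"         # e.g. declares the global array nvme1n1 (@20)

        # @21-23: split each output line on the first ':', skip lines with an
        # empty value, then store the value, e.g. nvme1n1[nsze]="0x17a17a".
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue
            reg=${reg//[[:space:]]/}              # "nsze ", "lbaf0 " -> bare key
            eval "${ref}[${reg}]=\"${val# }\""    # keep full value, incl. spaces
        done < <("$NVME_BIN" "$@")  # e.g. "$NVME_BIN" id-ns /dev/nvme1n1 (@16)
    }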
00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 
01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.185 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:01.186 01:21:46 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:01.186 01:21:46 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:01.186 01:21:46 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:01.186 01:21:46 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.186 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.452 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:01.453 01:21:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.453 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:01.454 01:21:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:01.454 01:21:46 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.454 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 
01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:01.455 01:21:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.455 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:01.456 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:01.456 01:21:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
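The block above is the namespace-identify pass of the trace: for each /dev/nvme2nX the harness feeds nvme id-ns into a small reader loop that splits every output line on the first colon and stores the reported fields (nsze, ncap, flbas, the lbaf0..lbaf7 descriptors, and so on) in a bash associative array named after the namespace. A minimal stand-alone sketch of that pattern, using the made-up array name ns_info and /dev/nvme2n2 as a placeholder device (an illustration of the technique, not the verbatim functions.sh helper; it assumes nvme-cli is installed and the command is run with sufficient privileges):

  declare -A ns_info=()
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue            # skip header/blank lines that carry no value
      reg=${reg//[[:space:]]/}             # field name, e.g. nsze, ncap, flbas, lbaf0
      read -r val <<<"$val"                # trim the padding nvme-cli prints around the value
      ns_info[$reg]=$val
  done < <(nvme id-ns /dev/nvme2n2)
  printf 'nsze=%s flbas=%s lbaf4=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}" "${ns_info[lbaf4]}"

Reading the values captured in this run: flbas=0x4 selects LBA format 4, and lbaf4 is "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte logical blocks with no per-block metadata.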
00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.457 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.458 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:01.459 01:21:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:01.459 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:01.460 01:21:46 nvme_fdp -- scripts/common.sh@15 -- # local i 00:12:01.460 01:21:46 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:01.460 01:21:46 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:01.460 01:21:46 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:01.460 01:21:46 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.460 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:01.461 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 
01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.462 01:21:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:01.463 01:21:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:01.463 01:21:46 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:12:01.463 01:21:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:01.463 01:21:46 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:01.463 01:21:46 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:02.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:02.964 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:02.964 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:02.964 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:02.964 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.223 01:21:48 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:03.223 01:21:48 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:03.223 01:21:48 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.223 01:21:48 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:03.223 ************************************ 00:12:03.223 START TEST nvme_flexible_data_placement 00:12:03.223 ************************************ 00:12:03.223 01:21:48 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:03.483 Initializing NVMe Controllers 00:12:03.483 Attaching to 0000:00:13.0 00:12:03.483 Controller supports FDP Attached to 0000:00:13.0 00:12:03.483 Namespace ID: 1 Endurance Group ID: 1 
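The controller selection just traced reduces to one test: CTRATT bit 19 (FDP supported) is set for nvme3 (0x88010) and clear for the others (0x8000). A standalone sketch of the same check, assuming nvme-cli and jq are available; /dev/nvme0 is a placeholder device:

    # Hedged sketch of ctrl_has_fdp: report whether a controller advertises
    # Flexible Data Placement by testing CTRATT bit 19 in its identify data.
    dev=${1:-/dev/nvme0}                                   # placeholder device
    ctratt=$(nvme id-ctrl "$dev" -o json | jq -r '.ctratt')
    if (( ctratt & (1 << 19) )); then
        printf '%s supports FDP (ctratt=0x%x)\n' "$dev" "$ctratt"
    else
        printf '%s does not advertise FDP\n' "$dev"
    fi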
00:12:03.483 Initialization complete. 00:12:03.483 00:12:03.483 ================================== 00:12:03.483 == FDP tests for Namespace: #01 == 00:12:03.483 ================================== 00:12:03.483 00:12:03.483 Get Feature: FDP: 00:12:03.483 ================= 00:12:03.483 Enabled: Yes 00:12:03.483 FDP configuration Index: 0 00:12:03.483 00:12:03.483 FDP configurations log page 00:12:03.483 =========================== 00:12:03.483 Number of FDP configurations: 1 00:12:03.483 Version: 0 00:12:03.483 Size: 112 00:12:03.483 FDP Configuration Descriptor: 0 00:12:03.483 Descriptor Size: 96 00:12:03.483 Reclaim Group Identifier format: 2 00:12:03.483 FDP Volatile Write Cache: Not Present 00:12:03.483 FDP Configuration: Valid 00:12:03.484 Vendor Specific Size: 0 00:12:03.484 Number of Reclaim Groups: 2 00:12:03.484 Number of Reclaim Unit Handles: 8 00:12:03.484 Max Placement Identifiers: 128 00:12:03.484 Number of Namespaces Supported: 256 00:12:03.484 Reclaim unit Nominal Size: 6000000 bytes 00:12:03.484 Estimated Reclaim Unit Time Limit: Not Reported 00:12:03.484 RUH Desc #000: RUH Type: Initially Isolated 00:12:03.484 RUH Desc #001: RUH Type: Initially Isolated 00:12:03.484 RUH Desc #002: RUH Type: Initially Isolated 00:12:03.484 RUH Desc #003: RUH Type: Initially Isolated 00:12:03.484 RUH Desc #004: RUH Type: Initially Isolated 00:12:03.484 RUH Desc #005: RUH Type: Initially Isolated 00:12:03.484 RUH Desc #006: RUH Type: Initially Isolated 00:12:03.484 RUH Desc #007: RUH Type: Initially Isolated 00:12:03.484 00:12:03.484 FDP reclaim unit handle usage log page 00:12:03.484 ====================================== 00:12:03.484 Number of Reclaim Unit Handles: 8 00:12:03.484 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:03.484 RUH Usage Desc #001: RUH Attributes: Unused 00:12:03.484 RUH Usage Desc #002: RUH Attributes: Unused 00:12:03.484 RUH Usage Desc #003: RUH Attributes: Unused 00:12:03.484 RUH Usage Desc #004: RUH Attributes: Unused 00:12:03.484 RUH Usage Desc #005: RUH Attributes: Unused 00:12:03.484 RUH Usage Desc #006: RUH Attributes: Unused 00:12:03.484 RUH Usage Desc #007: RUH Attributes: Unused 00:12:03.484 00:12:03.484 FDP statistics log page 00:12:03.484 ======================= 00:12:03.484 Host bytes with metadata written: 1708515328 00:12:03.484 Media bytes with metadata written: 1709293568 00:12:03.484 Media bytes erased: 0 00:12:03.484 00:12:03.484 FDP Reclaim unit handle status 00:12:03.484 ============================== 00:12:03.484 Number of RUHS descriptors: 2 00:12:03.484 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000002a2 00:12:03.484 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:03.484 00:12:03.484 FDP write on placement id: 0 success 00:12:03.484 00:12:03.484 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:03.484 00:12:03.484 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:03.484 00:12:03.484 Get Feature: FDP Events for Placement handle: #0 00:12:03.484 ======================== 00:12:03.484 Number of FDP Events: 6 00:12:03.484 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:03.484 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:03.484 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:12:03.484 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:03.484 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:03.484 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
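The statistics section above already contains enough to sanity-check write amplification for this run; dividing media bytes by host bytes written gives roughly 1.0005:

    # Back-of-the-envelope figure from the FDP statistics printed above
    # (values are specific to this run).
    host_bytes=1708515328
    media_bytes=1709293568
    awk -v h="$host_bytes" -v m="$media_bytes" \
        'BEGIN { printf "write amplification ~ %.4f\n", m / h }'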
00:12:03.484 00:12:03.484 FDP events log page 00:12:03.484 =================== 00:12:03.484 Number of FDP events: 1 00:12:03.484 FDP Event #0: 00:12:03.484 Event Type: RU Not Written to Capacity 00:12:03.484 Placement Identifier: Valid 00:12:03.484 NSID: Valid 00:12:03.484 Location: Valid 00:12:03.484 Placement Identifier: 0 00:12:03.484 Event Timestamp: 3 00:12:03.484 Namespace Identifier: 1 00:12:03.484 Reclaim Group Identifier: 0 00:12:03.484 Reclaim Unit Handle Identifier: 0 00:12:03.484 00:12:03.484 FDP test passed 00:12:03.484 00:12:03.484 real 0m0.267s 00:12:03.484 user 0m0.072s 00:12:03.484 sys 0m0.095s 00:12:03.484 ************************************ 00:12:03.484 END TEST nvme_flexible_data_placement 00:12:03.484 ************************************ 00:12:03.484 01:21:48 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.484 01:21:48 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:03.484 ************************************ 00:12:03.484 END TEST nvme_fdp 00:12:03.484 ************************************ 00:12:03.484 00:12:03.484 real 0m8.923s 00:12:03.484 user 0m1.412s 00:12:03.484 sys 0m2.500s 00:12:03.484 01:21:48 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.484 01:21:48 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:03.484 01:21:48 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:12:03.484 01:21:48 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:03.484 01:21:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:03.484 01:21:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.484 01:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:03.484 ************************************ 00:12:03.484 START TEST nvme_rpc 00:12:03.484 ************************************ 00:12:03.484 01:21:48 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:03.743 * Looking for test storage... 
00:12:03.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:03.743 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:03.743 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:03.743 01:21:48 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:12:03.743 01:21:48 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:12:03.743 01:21:48 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:12:03.743 01:21:48 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:12:03.743 01:21:48 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:03.743 01:21:48 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:03.743 01:21:48 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:03.744 01:21:48 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:03.744 01:21:48 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:12:03.744 01:21:49 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:03.744 01:21:49 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=83492 00:12:03.744 01:21:49 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:03.744 01:21:49 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:03.744 01:21:49 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 83492 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 83492 ']' 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:03.744 01:21:49 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.003 [2024-07-21 01:21:49.126341] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
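Once the target is listening on the default socket, the test drives it entirely over rpc.py; a condensed sketch of the call sequence (the same three RPCs appear in the trace that follows, and the firmware call is expected to fail):

    # Condensed sketch of the nvme_rpc.sh RPC sequence; paths and the Nvme0
    # name are taken from this run, /var/tmp/spdk.sock is the default socket.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # creates Nvme0n1
    $RPC bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
        || echo 'expected failure: -32603 "open file failed."'
    $RPC bdev_nvme_detach_controller Nvme0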
00:12:04.003 [2024-07-21 01:21:49.126661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83492 ] 00:12:04.003 [2024-07-21 01:21:49.301897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:04.261 [2024-07-21 01:21:49.366874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.261 [2024-07-21 01:21:49.366974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.827 01:21:49 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.827 01:21:49 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:04.827 01:21:49 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:04.827 Nvme0n1 00:12:05.084 01:21:50 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:05.084 01:21:50 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:05.084 request: 00:12:05.084 { 00:12:05.084 "filename": "non_existing_file", 00:12:05.084 "bdev_name": "Nvme0n1", 00:12:05.085 "method": "bdev_nvme_apply_firmware", 00:12:05.085 "req_id": 1 00:12:05.085 } 00:12:05.085 Got JSON-RPC error response 00:12:05.085 response: 00:12:05.085 { 00:12:05.085 "code": -32603, 00:12:05.085 "message": "open file failed." 00:12:05.085 } 00:12:05.085 01:21:50 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:05.085 01:21:50 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:05.085 01:21:50 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:05.342 01:21:50 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:05.342 01:21:50 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 83492 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 83492 ']' 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 83492 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83492 00:12:05.342 killing process with pid 83492 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83492' 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@965 -- # kill 83492 00:12:05.342 01:21:50 nvme_rpc -- common/autotest_common.sh@970 -- # wait 83492 00:12:05.909 ************************************ 00:12:05.909 END TEST nvme_rpc 00:12:05.909 ************************************ 00:12:05.909 00:12:05.909 real 0m2.370s 00:12:05.909 user 0m4.053s 00:12:05.909 sys 0m0.816s 00:12:05.909 01:21:51 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:05.909 01:21:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.909 01:21:51 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:05.909 01:21:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:05.909 
01:21:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:05.909 01:21:51 -- common/autotest_common.sh@10 -- # set +x 00:12:06.188 ************************************ 00:12:06.189 START TEST nvme_rpc_timeouts 00:12:06.189 ************************************ 00:12:06.189 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:06.189 * Looking for test storage... 00:12:06.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:06.189 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:06.189 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83548 00:12:06.189 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83548 00:12:06.189 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83573 00:12:06.189 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:06.189 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:06.189 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83573 00:12:06.189 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 83573 ']' 00:12:06.189 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.189 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:06.189 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.189 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:06.189 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:06.189 [2024-07-21 01:21:51.466984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:06.189 [2024-07-21 01:21:51.467322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83573 ] 00:12:06.477 [2024-07-21 01:21:51.641866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.477 [2024-07-21 01:21:51.705739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.477 [2024-07-21 01:21:51.705884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.044 Checking default timeout settings: 00:12:07.044 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:07.044 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:12:07.044 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:07.044 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:07.302 Making settings changes with rpc: 00:12:07.302 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:07.302 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:07.560 Check default vs. modified settings: 00:12:07.560 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:07.560 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83548 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83548 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:07.819 Setting action_on_timeout is changed as expected. 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83548 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:07.819 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83548 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:07.820 Setting timeout_us is changed as expected. 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83548 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83548 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:07.820 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:08.079 Setting timeout_admin_us is changed as expected. 00:12:08.079 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:08.079 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:08.079 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:12:08.079 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:08.079 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83548 /tmp/settings_modified_83548 00:12:08.079 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83573 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 83573 ']' 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 83573 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83573 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:08.079 killing process with pid 83573 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83573' 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 83573 00:12:08.079 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 83573 00:12:08.646 RPC TIMEOUT SETTING TEST PASSED. 00:12:08.646 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:08.646 00:12:08.646 real 0m2.538s 00:12:08.646 user 0m4.610s 00:12:08.646 sys 0m0.838s 00:12:08.646 ************************************ 00:12:08.646 END TEST nvme_rpc_timeouts 00:12:08.646 ************************************ 00:12:08.646 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:08.646 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:08.646 01:21:53 -- spdk/autotest.sh@243 -- # uname -s 00:12:08.646 01:21:53 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:12:08.646 01:21:53 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:08.646 01:21:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:08.646 01:21:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:08.646 01:21:53 -- common/autotest_common.sh@10 -- # set +x 00:12:08.646 ************************************ 00:12:08.646 START TEST sw_hotplug 00:12:08.646 ************************************ 00:12:08.646 01:21:53 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:08.905 * Looking for test storage... 
00:12:08.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:08.905 01:21:53 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:09.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:09.471 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.471 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.471 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.471 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:09.730 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:12:09.731 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:12:09.731 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:12:09.731 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@230 -- # local class 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:12:09.731 01:21:54 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:12:09.731 01:21:54 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:09.731 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:12:09.731 01:21:54 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:09.731 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:12:09.731 01:21:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=83912 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:12:09.990 01:21:55 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:09.990 01:21:55 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:09.990 01:21:55 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:09.990 01:21:55 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:09.990 01:21:55 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:09.990 Initializing NVMe Controllers 00:12:09.990 Attaching to 0000:00:10.0 00:12:09.990 Attaching to 0000:00:11.0 00:12:09.990 Attaching to 0000:00:12.0 00:12:09.990 Attaching to 0000:00:13.0 00:12:09.990 Attached to 0000:00:10.0 00:12:09.990 Attached to 0000:00:11.0 00:12:09.990 Attached to 0000:00:13.0 00:12:09.990 Attached to 0000:00:12.0 00:12:09.990 Initialization complete. Starting I/O... 
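The device discovery traced above (iter_pci_class_code 01 08 02) boils down to one lspci pipeline; pulled out of the xtrace as a standalone one-liner (the embedded quotes in cc match lspci -mm's quoted class field):

    # List BDFs of NVMe-class PCI functions (class 01, subclass 08, prog-if 02),
    # exactly as scripts/common.sh does above.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'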
00:12:09.990 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:09.990 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:09.990 QEMU NVMe Ctrl (12343 ): 0 I/Os completed (+0) 00:12:09.990 QEMU NVMe Ctrl (12342 ): 0 I/Os completed (+0) 00:12:09.990 00:12:11.363 QEMU NVMe Ctrl (12340 ): 1356 I/Os completed (+1356) 00:12:11.363 QEMU NVMe Ctrl (12341 ): 1356 I/Os completed (+1356) 00:12:11.363 QEMU NVMe Ctrl (12343 ): 1368 I/Os completed (+1368) 00:12:11.363 QEMU NVMe Ctrl (12342 ): 1366 I/Os completed (+1366) 00:12:11.363 00:12:12.295 QEMU NVMe Ctrl (12340 ): 2820 I/Os completed (+1464) 00:12:12.295 QEMU NVMe Ctrl (12341 ): 2831 I/Os completed (+1475) 00:12:12.295 QEMU NVMe Ctrl (12343 ): 2847 I/Os completed (+1479) 00:12:12.295 QEMU NVMe Ctrl (12342 ): 2850 I/Os completed (+1484) 00:12:12.295 00:12:13.229 QEMU NVMe Ctrl (12340 ): 4644 I/Os completed (+1824) 00:12:13.229 QEMU NVMe Ctrl (12341 ): 4658 I/Os completed (+1827) 00:12:13.229 QEMU NVMe Ctrl (12343 ): 4688 I/Os completed (+1841) 00:12:13.229 QEMU NVMe Ctrl (12342 ): 4690 I/Os completed (+1840) 00:12:13.229 00:12:14.166 QEMU NVMe Ctrl (12340 ): 6552 I/Os completed (+1908) 00:12:14.166 QEMU NVMe Ctrl (12341 ): 6567 I/Os completed (+1909) 00:12:14.166 QEMU NVMe Ctrl (12343 ): 6602 I/Os completed (+1914) 00:12:14.166 QEMU NVMe Ctrl (12342 ): 6600 I/Os completed (+1910) 00:12:14.166 00:12:15.102 QEMU NVMe Ctrl (12340 ): 8220 I/Os completed (+1668) 00:12:15.102 QEMU NVMe Ctrl (12341 ): 8241 I/Os completed (+1674) 00:12:15.102 QEMU NVMe Ctrl (12343 ): 8279 I/Os completed (+1677) 00:12:15.102 QEMU NVMe Ctrl (12342 ): 8309 I/Os completed (+1709) 00:12:15.102 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:16.038 [2024-07-21 01:22:01.068184] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:16.038 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:16.038 [2024-07-21 01:22:01.070369] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.070581] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.070647] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.070758] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:16.038 [2024-07-21 01:22:01.073710] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.073898] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.073963] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.074070] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:16.038 [2024-07-21 01:22:01.110496] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
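The bare "echo 1" and "echo uio_pci_generic" steps in this part of the trace are xtrace output, which omits redirections; the underlying sysfs writes for a remove/re-attach cycle follow the standard Linux PCI interface. A hedged sketch of that generic pattern (not lifted from sw_hotplug.sh, whose exact targets may differ):

    # Hedged sketch: generic sysfs hot-remove / re-attach of a PCI NVMe function.
    bdf=0000:00:10.0                                    # placeholder BDF from this run
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"         # surprise-remove the function
    echo 1 > /sys/bus/pci/rescan                        # let the bus rediscover it
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe            # bind to the userspace driver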
00:12:16.038 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:16.038 [2024-07-21 01:22:01.112012] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.112096] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.112201] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.112248] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:16.038 [2024-07-21 01:22:01.114402] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.114517] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.114572] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 [2024-07-21 01:22:01.114654] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:16.038 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:16.038 EAL: Scan for (pci) bus failed. 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:16.038 QEMU NVMe Ctrl (12343 ): 10000 I/Os completed (+1721) 00:12:16.038 QEMU NVMe Ctrl (12342 ): 10044 I/Os completed (+1735) 00:12:16.038 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:16.038 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:16.038 Attaching to 0000:00:10.0 00:12:16.038 Attached to 0000:00:10.0 00:12:16.296 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:16.296 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:16.296 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:16.296 Attaching to 0000:00:11.0 00:12:16.296 Attached to 0000:00:11.0 00:12:17.228 QEMU NVMe Ctrl (12343 ): 11888 I/Os completed (+1888) 00:12:17.228 QEMU NVMe Ctrl (12342 ): 11933 I/Os completed (+1889) 00:12:17.228 QEMU NVMe Ctrl (12340 ): 1750 I/Os completed (+1750) 00:12:17.228 QEMU NVMe Ctrl (12341 ): 1523 I/Os completed (+1523) 00:12:17.228 00:12:18.159 QEMU NVMe Ctrl (12343 ): 13600 I/Os completed (+1712) 00:12:18.159 QEMU NVMe Ctrl (12342 ): 13653 I/Os completed (+1720) 00:12:18.159 QEMU NVMe Ctrl (12340 ): 3486 I/Os completed (+1736) 00:12:18.159 QEMU NVMe Ctrl (12341 ): 3256 I/Os completed (+1733) 00:12:18.159 00:12:19.090 QEMU NVMe Ctrl (12343 ): 15308 I/Os completed (+1708) 00:12:19.090 QEMU NVMe Ctrl (12342 ): 15366 I/Os completed (+1713) 00:12:19.090 QEMU NVMe Ctrl (12340 ): 5210 I/Os completed (+1724) 00:12:19.091 QEMU NVMe Ctrl (12341 ): 4991 I/Os completed (+1735) 00:12:19.091 00:12:20.024 QEMU NVMe Ctrl (12343 ): 17100 
I/Os completed (+1792) 00:12:20.024 QEMU NVMe Ctrl (12342 ): 17158 I/Os completed (+1792) 00:12:20.024 QEMU NVMe Ctrl (12340 ): 7005 I/Os completed (+1795) 00:12:20.024 QEMU NVMe Ctrl (12341 ): 6786 I/Os completed (+1795) 00:12:20.024 00:12:20.961 QEMU NVMe Ctrl (12343 ): 18864 I/Os completed (+1764) 00:12:21.220 QEMU NVMe Ctrl (12342 ): 18926 I/Os completed (+1768) 00:12:21.220 QEMU NVMe Ctrl (12340 ): 8786 I/Os completed (+1781) 00:12:21.220 QEMU NVMe Ctrl (12341 ): 8556 I/Os completed (+1770) 00:12:21.220 00:12:22.157 QEMU NVMe Ctrl (12343 ): 20584 I/Os completed (+1720) 00:12:22.157 QEMU NVMe Ctrl (12342 ): 20650 I/Os completed (+1724) 00:12:22.157 QEMU NVMe Ctrl (12340 ): 10517 I/Os completed (+1731) 00:12:22.157 QEMU NVMe Ctrl (12341 ): 10279 I/Os completed (+1723) 00:12:22.157 00:12:23.093 QEMU NVMe Ctrl (12343 ): 22280 I/Os completed (+1696) 00:12:23.093 QEMU NVMe Ctrl (12342 ): 22346 I/Os completed (+1696) 00:12:23.093 QEMU NVMe Ctrl (12340 ): 12243 I/Os completed (+1726) 00:12:23.093 QEMU NVMe Ctrl (12341 ): 12007 I/Os completed (+1728) 00:12:23.093 00:12:24.031 QEMU NVMe Ctrl (12343 ): 24080 I/Os completed (+1800) 00:12:24.031 QEMU NVMe Ctrl (12342 ): 24147 I/Os completed (+1801) 00:12:24.031 QEMU NVMe Ctrl (12340 ): 14055 I/Os completed (+1812) 00:12:24.031 QEMU NVMe Ctrl (12341 ): 13819 I/Os completed (+1812) 00:12:24.031 00:12:24.968 QEMU NVMe Ctrl (12343 ): 25708 I/Os completed (+1628) 00:12:24.968 QEMU NVMe Ctrl (12342 ): 25786 I/Os completed (+1639) 00:12:24.968 QEMU NVMe Ctrl (12340 ): 15703 I/Os completed (+1648) 00:12:24.968 QEMU NVMe Ctrl (12341 ): 15491 I/Os completed (+1672) 00:12:24.968 00:12:26.346 QEMU NVMe Ctrl (12343 ): 27412 I/Os completed (+1704) 00:12:26.346 QEMU NVMe Ctrl (12342 ): 27488 I/Os completed (+1702) 00:12:26.346 QEMU NVMe Ctrl (12340 ): 17414 I/Os completed (+1711) 00:12:26.346 QEMU NVMe Ctrl (12341 ): 17214 I/Os completed (+1723) 00:12:26.346 00:12:27.281 QEMU NVMe Ctrl (12343 ): 29308 I/Os completed (+1896) 00:12:27.281 QEMU NVMe Ctrl (12342 ): 29391 I/Os completed (+1903) 00:12:27.281 QEMU NVMe Ctrl (12340 ): 19317 I/Os completed (+1903) 00:12:27.281 QEMU NVMe Ctrl (12341 ): 19110 I/Os completed (+1896) 00:12:27.281 00:12:28.216 QEMU NVMe Ctrl (12343 ): 31036 I/Os completed (+1728) 00:12:28.216 QEMU NVMe Ctrl (12342 ): 31127 I/Os completed (+1736) 00:12:28.216 QEMU NVMe Ctrl (12340 ): 21074 I/Os completed (+1757) 00:12:28.216 QEMU NVMe Ctrl (12341 ): 20848 I/Os completed (+1738) 00:12:28.216 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:28.216 [2024-07-21 01:22:13.451539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:28.216 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:28.216 [2024-07-21 01:22:13.453572] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.453737] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.453791] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.453896] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:28.216 [2024-07-21 01:22:13.456096] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.456217] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.456267] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.456365] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:28.216 [2024-07-21 01:22:13.490520] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:28.216 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:28.216 [2024-07-21 01:22:13.492182] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.492224] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.492246] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.492265] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:28.216 [2024-07-21 01:22:13.493952] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.493982] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.494007] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 [2024-07-21 01:22:13.494024] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:28.216 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:28.216 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:28.216 EAL: Scan for (pci) bus failed. 
00:12:28.474 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:28.475 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:28.475 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:28.475 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:28.475 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:28.475 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:28.475 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:28.475 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:28.475 Attaching to 0000:00:10.0 00:12:28.475 Attached to 0000:00:10.0 00:12:28.734 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:28.734 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:28.734 01:22:13 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:28.734 Attaching to 0000:00:11.0 00:12:28.734 Attached to 0000:00:11.0 00:12:28.993 QEMU NVMe Ctrl (12343 ): 32945 I/Os completed (+1909) 00:12:28.993 QEMU NVMe Ctrl (12342 ): 33036 I/Os completed (+1909) 00:12:28.993 QEMU NVMe Ctrl (12340 ): 959 I/Os completed (+959) 00:12:28.993 QEMU NVMe Ctrl (12341 ): 739 I/Os completed (+739) 00:12:28.993 00:12:30.371 QEMU NVMe Ctrl (12343 ): 34833 I/Os completed (+1888) 00:12:30.371 QEMU NVMe Ctrl (12342 ): 34924 I/Os completed (+1888) 00:12:30.371 QEMU NVMe Ctrl (12340 ): 2863 I/Os completed (+1904) 00:12:30.371 QEMU NVMe Ctrl (12341 ): 2631 I/Os completed (+1892) 00:12:30.371 00:12:31.306 QEMU NVMe Ctrl (12343 ): 36549 I/Os completed (+1716) 00:12:31.306 QEMU NVMe Ctrl (12342 ): 36644 I/Os completed (+1720) 00:12:31.306 QEMU NVMe Ctrl (12340 ): 4589 I/Os completed (+1726) 00:12:31.306 QEMU NVMe Ctrl (12341 ): 4367 I/Os completed (+1736) 00:12:31.306 00:12:32.350 QEMU NVMe Ctrl (12343 ): 38397 I/Os completed (+1848) 00:12:32.350 QEMU NVMe Ctrl (12342 ): 38492 I/Os completed (+1848) 00:12:32.350 QEMU NVMe Ctrl (12340 ): 6444 I/Os completed (+1855) 00:12:32.350 QEMU NVMe Ctrl (12341 ): 6229 I/Os completed (+1862) 00:12:32.350 00:12:33.285 QEMU NVMe Ctrl (12343 ): 40321 I/Os completed (+1924) 00:12:33.285 QEMU NVMe Ctrl (12342 ): 40416 I/Os completed (+1924) 00:12:33.285 QEMU NVMe Ctrl (12340 ): 8368 I/Os completed (+1924) 00:12:33.285 QEMU NVMe Ctrl (12341 ): 8156 I/Os completed (+1927) 00:12:33.285 00:12:34.220 QEMU NVMe Ctrl (12343 ): 42181 I/Os completed (+1860) 00:12:34.220 QEMU NVMe Ctrl (12342 ): 42276 I/Os completed (+1860) 00:12:34.220 QEMU NVMe Ctrl (12340 ): 10241 I/Os completed (+1873) 00:12:34.220 QEMU NVMe Ctrl (12341 ): 10020 I/Os completed (+1864) 00:12:34.220 00:12:35.154 QEMU NVMe Ctrl (12343 ): 43909 I/Os completed (+1728) 00:12:35.154 QEMU NVMe Ctrl (12342 ): 44006 I/Os completed (+1730) 00:12:35.154 QEMU NVMe Ctrl (12340 ): 11977 I/Os completed (+1736) 00:12:35.154 QEMU NVMe Ctrl (12341 ): 11779 I/Os completed (+1759) 00:12:35.154 00:12:36.102 QEMU NVMe Ctrl (12343 ): 45509 I/Os completed (+1600) 00:12:36.102 QEMU NVMe Ctrl (12342 ): 45615 I/Os completed (+1609) 00:12:36.102 QEMU NVMe Ctrl (12340 ): 13586 I/Os completed (+1609) 00:12:36.102 QEMU NVMe Ctrl (12341 ): 13423 I/Os completed (+1644) 00:12:36.102 00:12:37.037 QEMU NVMe Ctrl (12343 ): 47333 I/Os completed (+1824) 00:12:37.037 QEMU NVMe Ctrl (12342 ): 47442 I/Os completed (+1827) 00:12:37.037 QEMU NVMe Ctrl (12340 ): 15416 I/Os completed (+1830) 00:12:37.037 QEMU NVMe Ctrl (12341 ): 15265 I/Os completed (+1842) 00:12:37.037 
00:12:37.972 QEMU NVMe Ctrl (12343 ): 49241 I/Os completed (+1908) 00:12:37.972 QEMU NVMe Ctrl (12342 ): 49350 I/Os completed (+1908) 00:12:37.972 QEMU NVMe Ctrl (12340 ): 17336 I/Os completed (+1920) 00:12:37.972 QEMU NVMe Ctrl (12341 ): 17173 I/Os completed (+1908) 00:12:37.972 00:12:39.347 QEMU NVMe Ctrl (12343 ): 51133 I/Os completed (+1892) 00:12:39.347 QEMU NVMe Ctrl (12342 ): 51247 I/Os completed (+1897) 00:12:39.347 QEMU NVMe Ctrl (12340 ): 19237 I/Os completed (+1901) 00:12:39.347 QEMU NVMe Ctrl (12341 ): 19079 I/Os completed (+1906) 00:12:39.347 00:12:40.283 QEMU NVMe Ctrl (12343 ): 53037 I/Os completed (+1904) 00:12:40.283 QEMU NVMe Ctrl (12342 ): 53153 I/Os completed (+1906) 00:12:40.283 QEMU NVMe Ctrl (12340 ): 21145 I/Os completed (+1908) 00:12:40.283 QEMU NVMe Ctrl (12341 ): 20986 I/Os completed (+1907) 00:12:40.283 00:12:40.541 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:40.541 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:40.541 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:40.541 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:40.541 [2024-07-21 01:22:25.836570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:40.541 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:40.541 [2024-07-21 01:22:25.838702] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.541 [2024-07-21 01:22:25.838801] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.541 [2024-07-21 01:22:25.838865] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.541 [2024-07-21 01:22:25.838918] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.541 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:40.541 [2024-07-21 01:22:25.841058] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.541 [2024-07-21 01:22:25.841182] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.541 [2024-07-21 01:22:25.841283] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.541 [2024-07-21 01:22:25.841330] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:40.800 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:40.800 [2024-07-21 01:22:25.878117] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:40.800 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:40.800 [2024-07-21 01:22:25.879960] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 [2024-07-21 01:22:25.880003] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 [2024-07-21 01:22:25.880026] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 [2024-07-21 01:22:25.880043] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:40.800 [2024-07-21 01:22:25.882235] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 [2024-07-21 01:22:25.882276] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 [2024-07-21 01:22:25.882297] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 [2024-07-21 01:22:25.882314] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.800 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:40.800 EAL: Scan for (pci) bus failed. 00:12:40.800 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:40.800 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:40.800 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:40.800 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:40.800 01:22:25 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:40.800 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:40.800 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:40.800 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:40.800 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:40.800 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:40.800 Attaching to 0000:00:10.0 00:12:40.800 Attached to 0000:00:10.0 00:12:41.058 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:41.058 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:41.058 01:22:26 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:41.058 Attaching to 0000:00:11.0 00:12:41.058 Attached to 0000:00:11.0 00:12:41.058 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:12:41.058 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:12:41.058 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:41.058 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:41.058 [2024-07-21 01:22:26.210480] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:53.267 01:22:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:53.267 01:22:38 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:53.267 01:22:38 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.14 00:12:53.267 01:22:38 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.14 00:12:53.267 01:22:38 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.14 00:12:53.267 01:22:38 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.14 2 00:12:53.267 remove_attach_helper took 43.14s to complete (handling 2 nvme drive(s)) 01:22:38 
sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 83912 00:12:59.837 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (83912) - No such process 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 83912 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=84461 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:59.837 01:22:44 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 84461 00:12:59.837 01:22:44 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 84461 ']' 00:12:59.837 01:22:44 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.837 01:22:44 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.837 01:22:44 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.837 01:22:44 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.837 01:22:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.837 [2024-07-21 01:22:44.320836] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:59.837 [2024-07-21 01:22:44.321193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84461 ] 00:12:59.837 [2024-07-21 01:22:44.496011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.837 [2024-07-21 01:22:44.559197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.837 01:22:45 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:59.837 01:22:45 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:12:59.837 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:12:59.837 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:12:59.837 01:22:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.837 01:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.096 Nvme00n1 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.096 [ 00:13:00.096 { 00:13:00.096 "name": "Nvme00n1", 00:13:00.096 "aliases": [ 00:13:00.096 "a86d8288-d6d7-4657-9790-4ccba857656e" 00:13:00.096 ], 00:13:00.096 "product_name": "NVMe disk", 00:13:00.096 "block_size": 4096, 00:13:00.096 "num_blocks": 1548666, 00:13:00.096 "uuid": "a86d8288-d6d7-4657-9790-4ccba857656e", 00:13:00.096 "md_size": 64, 00:13:00.096 "md_interleave": false, 00:13:00.096 "dif_type": 0, 00:13:00.096 "assigned_rate_limits": { 00:13:00.096 "rw_ios_per_sec": 0, 00:13:00.096 "rw_mbytes_per_sec": 0, 00:13:00.096 "r_mbytes_per_sec": 0, 00:13:00.096 "w_mbytes_per_sec": 0 00:13:00.096 }, 00:13:00.096 "claimed": false, 00:13:00.096 "zoned": false, 00:13:00.096 "supported_io_types": { 00:13:00.096 "read": true, 00:13:00.096 "write": true, 00:13:00.096 "unmap": true, 00:13:00.096 "write_zeroes": true, 00:13:00.096 "flush": true, 00:13:00.096 "reset": true, 00:13:00.096 "compare": true, 00:13:00.096 "compare_and_write": false, 00:13:00.096 "abort": true, 00:13:00.096 "nvme_admin": true, 00:13:00.096 "nvme_io": true 00:13:00.096 }, 00:13:00.096 "driver_specific": { 00:13:00.096 "nvme": [ 00:13:00.096 { 00:13:00.096 "pci_address": "0000:00:10.0", 00:13:00.096 "trid": { 00:13:00.096 "trtype": "PCIe", 00:13:00.096 "traddr": "0000:00:10.0" 00:13:00.096 }, 00:13:00.096 "ctrlr_data": { 00:13:00.096 
"cntlid": 0, 00:13:00.096 "vendor_id": "0x1b36", 00:13:00.096 "model_number": "QEMU NVMe Ctrl", 00:13:00.096 "serial_number": "12340", 00:13:00.096 "firmware_revision": "8.0.0", 00:13:00.096 "subnqn": "nqn.2019-08.org.qemu:12340", 00:13:00.096 "oacs": { 00:13:00.096 "security": 0, 00:13:00.096 "format": 1, 00:13:00.096 "firmware": 0, 00:13:00.096 "ns_manage": 1 00:13:00.096 }, 00:13:00.096 "multi_ctrlr": false, 00:13:00.096 "ana_reporting": false 00:13:00.096 }, 00:13:00.096 "vs": { 00:13:00.096 "nvme_version": "1.4" 00:13:00.096 }, 00:13:00.096 "ns_data": { 00:13:00.096 "id": 1, 00:13:00.096 "can_share": false 00:13:00.096 } 00:13:00.096 } 00:13:00.096 ], 00:13:00.096 "mp_policy": "active_passive" 00:13:00.096 } 00:13:00.096 } 00:13:00.096 ] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.096 Nvme01n1 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.096 [ 00:13:00.096 { 00:13:00.096 "name": "Nvme01n1", 00:13:00.096 "aliases": [ 00:13:00.096 "3848fb81-0f9d-4cd6-9be9-c677b749656e" 00:13:00.096 ], 00:13:00.096 "product_name": "NVMe disk", 00:13:00.096 "block_size": 4096, 00:13:00.096 "num_blocks": 1310720, 00:13:00.096 "uuid": "3848fb81-0f9d-4cd6-9be9-c677b749656e", 00:13:00.096 "assigned_rate_limits": { 00:13:00.096 "rw_ios_per_sec": 0, 00:13:00.096 "rw_mbytes_per_sec": 0, 00:13:00.096 "r_mbytes_per_sec": 0, 00:13:00.096 "w_mbytes_per_sec": 0 00:13:00.096 }, 00:13:00.096 "claimed": false, 00:13:00.096 "zoned": false, 00:13:00.096 "supported_io_types": { 00:13:00.096 "read": true, 00:13:00.096 "write": true, 00:13:00.096 "unmap": true, 00:13:00.096 "write_zeroes": true, 00:13:00.096 "flush": true, 00:13:00.096 "reset": true, 00:13:00.096 "compare": true, 00:13:00.096 "compare_and_write": false, 00:13:00.096 "abort": true, 00:13:00.096 "nvme_admin": true, 00:13:00.096 "nvme_io": true 00:13:00.096 }, 00:13:00.096 "driver_specific": { 00:13:00.096 "nvme": [ 00:13:00.096 { 00:13:00.096 "pci_address": 
"0000:00:11.0", 00:13:00.096 "trid": { 00:13:00.096 "trtype": "PCIe", 00:13:00.096 "traddr": "0000:00:11.0" 00:13:00.096 }, 00:13:00.096 "ctrlr_data": { 00:13:00.096 "cntlid": 0, 00:13:00.096 "vendor_id": "0x1b36", 00:13:00.096 "model_number": "QEMU NVMe Ctrl", 00:13:00.096 "serial_number": "12341", 00:13:00.096 "firmware_revision": "8.0.0", 00:13:00.096 "subnqn": "nqn.2019-08.org.qemu:12341", 00:13:00.096 "oacs": { 00:13:00.096 "security": 0, 00:13:00.096 "format": 1, 00:13:00.096 "firmware": 0, 00:13:00.096 "ns_manage": 1 00:13:00.096 }, 00:13:00.096 "multi_ctrlr": false, 00:13:00.096 "ana_reporting": false 00:13:00.096 }, 00:13:00.096 "vs": { 00:13:00.096 "nvme_version": "1.4" 00:13:00.096 }, 00:13:00.096 "ns_data": { 00:13:00.096 "id": 1, 00:13:00.096 "can_share": false 00:13:00.096 } 00:13:00.096 } 00:13:00.096 ], 00:13:00.096 "mp_policy": "active_passive" 00:13:00.096 } 00:13:00.096 } 00:13:00.096 ] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:13:00.096 01:22:45 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:13:00.096 01:22:45 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:13:06.672 01:22:51 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:06.672 01:22:51 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:06.672 01:22:51 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:06.672 01:22:51 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:06.672 01:22:51 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:06.672 01:22:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:13:06.672 01:22:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:13:06.672 [2024-07-21 01:22:51.393879] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:06.672 [2024-07-21 01:22:51.395941] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.395994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.396014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.672 [2024-07-21 01:22:51.396046] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.396058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.396074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.672 [2024-07-21 01:22:51.396087] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.396108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.396120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.672 [2024-07-21 01:22:51.396135] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.396147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.396162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.672 [2024-07-21 01:22:51.793212] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:06.672 [2024-07-21 01:22:51.795326] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.795367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.795403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.672 [2024-07-21 01:22:51.795422] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.795437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.795449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.672 [2024-07-21 01:22:51.795464] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.795475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.795490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.672 [2024-07-21 01:22:51.795501] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.672 [2024-07-21 01:22:51.795519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.672 [2024-07-21 01:22:51.795531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.13 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.13 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.13 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme 
drive(s))' 12.13 2 00:13:13.228 remove_attach_helper took 12.13s to complete (handling 2 nvme drive(s)) 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:13:13.228 01:22:57 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:13:13.228 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:13:18.490 01:23:03 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:18.490 01:23:03 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:13:18.490 01:23:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:13:25.145 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.145 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.145 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:13:25.145 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.07 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:13:25.145 01:23:09 sw_hotplug -- 
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:13:25.145 01:23:09 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.07 00:13:25.145 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.07 00:13:25.146 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.07 2 00:13:25.146 remove_attach_helper took 12.07s to complete (handling 2 nvme drive(s)) 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:13:25.146 01:23:09 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 84461 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 84461 ']' 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 84461 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84461 00:13:25.146 killing process with pid 84461 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84461' 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@965 -- # kill 84461 00:13:25.146 01:23:09 sw_hotplug -- common/autotest_common.sh@970 -- # wait 84461 00:13:25.146 ************************************ 00:13:25.146 END TEST sw_hotplug 00:13:25.146 ************************************ 00:13:25.146 00:13:25.146 real 1m16.390s 00:13:25.146 user 0m44.188s 00:13:25.146 sys 0m15.335s 00:13:25.146 01:23:10 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:25.146 01:23:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.146 01:23:10 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:13:25.146 01:23:10 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:25.146 01:23:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:25.146 01:23:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.146 01:23:10 -- common/autotest_common.sh@10 -- # set +x 00:13:25.146 ************************************ 00:13:25.146 START TEST nvme_xnvme 00:13:25.146 ************************************ 00:13:25.146 01:23:10 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:25.146 * Looking for test storage... 
00:13:25.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.146 01:23:10 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.146 01:23:10 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.146 01:23:10 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.146 01:23:10 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.146 01:23:10 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.146 01:23:10 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.146 01:23:10 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.146 01:23:10 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:25.146 01:23:10 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.146 01:23:10 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:25.146 01:23:10 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:25.146 01:23:10 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.146 01:23:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.404 ************************************ 00:13:25.404 START TEST xnvme_to_malloc_dd_copy 00:13:25.404 ************************************ 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:25.404 01:23:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:25.404 { 00:13:25.404 "subsystems": [ 00:13:25.404 { 00:13:25.404 "subsystem": "bdev", 00:13:25.404 "config": [ 00:13:25.404 { 00:13:25.404 "params": { 00:13:25.404 "block_size": 512, 00:13:25.404 "num_blocks": 2097152, 00:13:25.404 "name": "malloc0" 00:13:25.404 }, 00:13:25.404 "method": "bdev_malloc_create" 00:13:25.404 }, 00:13:25.404 { 00:13:25.404 "params": { 00:13:25.404 "io_mechanism": "libaio", 00:13:25.404 "filename": "/dev/nullb0", 00:13:25.405 "name": "null0" 00:13:25.405 }, 00:13:25.405 "method": "bdev_xnvme_create" 00:13:25.405 }, 00:13:25.405 { 00:13:25.405 "method": "bdev_wait_for_examine" 00:13:25.405 } 00:13:25.405 ] 00:13:25.405 } 00:13:25.405 ] 00:13:25.405 } 00:13:25.405 [2024-07-21 01:23:10.591215] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
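The xnvme_to_malloc_dd_copy phase above drives spdk_dd with the JSON bdev config echoed in the log: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) named malloc0, and an xnvme bdev named null0 backed by /dev/nullb0 using the libaio io_mechanism. A minimal standalone sketch of the same copy step, limited to commands and parameters that appear in this log (the /tmp path is illustrative; the harness itself passes the config via /dev/fd/62):

    # load a 1 GiB null_blk device, as init_null_blk gb=1 does above
    modprobe null_blk gb=1
    # same bdev config that gen_conf hands to spdk_dd
    cat > /tmp/xnvme_copy.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create", "params": { "name": "malloc0", "block_size": 512, "num_blocks": 2097152 } },
      { "method": "bdev_xnvme_create", "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
      { "method": "bdev_wait_for_examine" }
    ] } ] }
    EOF
    # copy malloc0 -> null0, as in xnvme.sh@42
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json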
00:13:25.405 [2024-07-21 01:23:10.591436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84817 ] 00:13:25.663 [2024-07-21 01:23:10.757766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.663 [2024-07-21 01:23:10.821144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.948  Copying: 270/1024 [MB] (270 MBps) Copying: 542/1024 [MB] (272 MBps) Copying: 813/1024 [MB] (271 MBps) Copying: 1024/1024 [MB] (average 271 MBps) 00:13:30.948 00:13:30.948 01:23:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:30.948 01:23:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:30.948 01:23:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:30.948 01:23:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:30.949 { 00:13:30.949 "subsystems": [ 00:13:30.949 { 00:13:30.949 "subsystem": "bdev", 00:13:30.949 "config": [ 00:13:30.949 { 00:13:30.949 "params": { 00:13:30.949 "block_size": 512, 00:13:30.949 "num_blocks": 2097152, 00:13:30.949 "name": "malloc0" 00:13:30.949 }, 00:13:30.949 "method": "bdev_malloc_create" 00:13:30.949 }, 00:13:30.949 { 00:13:30.949 "params": { 00:13:30.949 "io_mechanism": "libaio", 00:13:30.949 "filename": "/dev/nullb0", 00:13:30.949 "name": "null0" 00:13:30.949 }, 00:13:30.949 "method": "bdev_xnvme_create" 00:13:30.949 }, 00:13:30.949 { 00:13:30.949 "method": "bdev_wait_for_examine" 00:13:30.949 } 00:13:30.949 ] 00:13:30.949 } 00:13:30.949 ] 00:13:30.949 } 00:13:30.949 [2024-07-21 01:23:16.113616] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:30.949 [2024-07-21 01:23:16.113729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84882 ] 00:13:31.207 [2024-07-21 01:23:16.280634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.207 [2024-07-21 01:23:16.341669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.413  Copying: 275/1024 [MB] (275 MBps) Copying: 553/1024 [MB] (278 MBps) Copying: 830/1024 [MB] (277 MBps) Copying: 1024/1024 [MB] (average 276 MBps) 00:13:36.413 00:13:36.413 01:23:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:36.413 01:23:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:36.413 01:23:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:36.413 01:23:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:36.413 01:23:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:36.413 01:23:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:36.413 { 00:13:36.413 "subsystems": [ 00:13:36.413 { 00:13:36.413 "subsystem": "bdev", 00:13:36.413 "config": [ 00:13:36.413 { 00:13:36.413 "params": { 00:13:36.413 "block_size": 512, 00:13:36.413 "num_blocks": 2097152, 00:13:36.413 "name": "malloc0" 00:13:36.413 }, 00:13:36.413 "method": "bdev_malloc_create" 00:13:36.413 }, 00:13:36.413 { 00:13:36.413 "params": { 00:13:36.413 "io_mechanism": "io_uring", 00:13:36.413 "filename": "/dev/nullb0", 00:13:36.413 "name": "null0" 00:13:36.413 }, 00:13:36.413 "method": "bdev_xnvme_create" 00:13:36.413 }, 00:13:36.413 { 00:13:36.413 "method": "bdev_wait_for_examine" 00:13:36.413 } 00:13:36.413 ] 00:13:36.413 } 00:13:36.413 ] 00:13:36.413 } 00:13:36.413 [2024-07-21 01:23:21.586703] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:36.413 [2024-07-21 01:23:21.587085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84947 ] 00:13:36.673 [2024-07-21 01:23:21.757541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.673 [2024-07-21 01:23:21.820312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.550  Copying: 284/1024 [MB] (284 MBps) Copying: 568/1024 [MB] (284 MBps) Copying: 854/1024 [MB] (286 MBps) Copying: 1024/1024 [MB] (average 284 MBps) 00:13:41.550 00:13:41.550 01:23:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:41.550 01:23:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:41.550 01:23:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:41.550 01:23:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:41.550 { 00:13:41.550 "subsystems": [ 00:13:41.550 { 00:13:41.550 "subsystem": "bdev", 00:13:41.550 "config": [ 00:13:41.550 { 00:13:41.550 "params": { 00:13:41.550 "block_size": 512, 00:13:41.550 "num_blocks": 2097152, 00:13:41.550 "name": "malloc0" 00:13:41.550 }, 00:13:41.550 "method": "bdev_malloc_create" 00:13:41.550 }, 00:13:41.550 { 00:13:41.550 "params": { 00:13:41.550 "io_mechanism": "io_uring", 00:13:41.550 "filename": "/dev/nullb0", 00:13:41.550 "name": "null0" 00:13:41.550 }, 00:13:41.550 "method": "bdev_xnvme_create" 00:13:41.550 }, 00:13:41.550 { 00:13:41.550 "method": "bdev_wait_for_examine" 00:13:41.550 } 00:13:41.550 ] 00:13:41.550 } 00:13:41.550 ] 00:13:41.550 } 00:13:41.808 [2024-07-21 01:23:26.870961] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:41.808 [2024-07-21 01:23:26.871083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85017 ] 00:13:41.808 [2024-07-21 01:23:27.041412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.808 [2024-07-21 01:23:27.103687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.784  Copying: 288/1024 [MB] (288 MBps) Copying: 578/1024 [MB] (290 MBps) Copying: 866/1024 [MB] (288 MBps) Copying: 1024/1024 [MB] (average 289 MBps) 00:13:46.784 00:13:46.784 01:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:46.784 01:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:46.784 ************************************ 00:13:46.784 END TEST xnvme_to_malloc_dd_copy 00:13:46.784 ************************************ 00:13:46.784 00:13:46.784 real 0m21.608s 00:13:46.784 user 0m16.433s 00:13:46.785 sys 0m4.725s 00:13:46.785 01:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.785 01:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:47.043 01:23:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:47.043 01:23:32 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:47.043 01:23:32 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:47.043 01:23:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.043 ************************************ 00:13:47.043 START TEST xnvme_bdevperf 00:13:47.043 ************************************ 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:47.043 01:23:32 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:47.043 01:23:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.043 { 00:13:47.043 "subsystems": [ 00:13:47.043 { 00:13:47.043 "subsystem": "bdev", 00:13:47.043 "config": [ 00:13:47.043 { 00:13:47.043 "params": { 00:13:47.043 "io_mechanism": "libaio", 00:13:47.043 "filename": "/dev/nullb0", 00:13:47.043 "name": "null0" 00:13:47.043 }, 00:13:47.043 "method": "bdev_xnvme_create" 00:13:47.043 }, 00:13:47.043 { 00:13:47.043 "method": "bdev_wait_for_examine" 00:13:47.043 } 00:13:47.043 ] 00:13:47.043 } 00:13:47.043 ] 00:13:47.043 } 00:13:47.043 [2024-07-21 01:23:32.271911] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:47.043 [2024-07-21 01:23:32.272033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85100 ] 00:13:47.302 [2024-07-21 01:23:32.439990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.302 [2024-07-21 01:23:32.504161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.561 Running I/O for 5 seconds... 00:13:52.840 00:13:52.840 Latency(us) 00:13:52.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.840 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:52.840 null0 : 5.00 168970.15 660.04 0.00 0.00 376.45 111.86 947.51 00:13:52.840 =================================================================================================================== 00:13:52.840 Total : 168970.15 660.04 0.00 0.00 376.45 111.86 947.51 00:13:52.840 01:23:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:52.840 01:23:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:52.840 01:23:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:52.840 01:23:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:52.840 01:23:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:52.840 01:23:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:52.840 { 00:13:52.840 "subsystems": [ 00:13:52.840 { 00:13:52.840 "subsystem": "bdev", 00:13:52.840 "config": [ 00:13:52.840 { 00:13:52.840 "params": { 00:13:52.840 "io_mechanism": "io_uring", 00:13:52.840 "filename": "/dev/nullb0", 00:13:52.840 "name": "null0" 00:13:52.840 }, 00:13:52.840 "method": "bdev_xnvme_create" 00:13:52.840 }, 00:13:52.840 { 00:13:52.840 "method": "bdev_wait_for_examine" 00:13:52.840 } 00:13:52.840 ] 00:13:52.840 } 00:13:52.840 ] 00:13:52.840 } 00:13:52.840 [2024-07-21 01:23:38.089136] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:52.840 [2024-07-21 01:23:38.089584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85168 ] 00:13:53.099 [2024-07-21 01:23:38.258147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.099 [2024-07-21 01:23:38.318671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.358 Running I/O for 5 seconds... 00:13:58.790 00:13:58.790 Latency(us) 00:13:58.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.790 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:58.790 null0 : 5.00 216938.26 847.42 0.00 0.00 292.86 176.01 383.28 00:13:58.790 =================================================================================================================== 00:13:58.790 Total : 216938.26 847.42 0.00 0.00 292.86 176.01 383.28 00:13:58.791 01:23:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:58.791 01:23:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:58.791 00:13:58.791 real 0m11.660s 00:13:58.791 user 0m8.153s 00:13:58.791 sys 0m3.297s 00:13:58.791 01:23:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.791 ************************************ 00:13:58.791 END TEST xnvme_bdevperf 00:13:58.791 ************************************ 00:13:58.791 01:23:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:58.791 ************************************ 00:13:58.791 END TEST nvme_xnvme 00:13:58.791 ************************************ 00:13:58.791 00:13:58.791 real 0m33.564s 00:13:58.791 user 0m24.700s 00:13:58.791 sys 0m8.201s 00:13:58.791 01:23:43 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.791 01:23:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.791 01:23:43 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:58.791 01:23:43 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:58.791 01:23:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.791 01:23:43 -- common/autotest_common.sh@10 -- # set +x 00:13:58.791 ************************************ 00:13:58.791 START TEST blockdev_xnvme 00:13:58.791 ************************************ 00:13:58.791 01:23:43 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:58.791 * Looking for test storage... 
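The two bdevperf runs above drive the same 1 GiB null_blk device through an xNVMe bdev, first with the libaio io_mechanism (~169K IOPS) and then with io_uring (~217K IOPS), using the same queue depth, workload, and block size. A minimal sketch of reproducing that comparison outside the harness, assuming an SPDK checkout built with xNVMe support at $SPDK_DIR (the /tmp/xnvme_null0.json path is only an illustration):

# Hedged sketch: rerun the null_blk libaio vs io_uring comparison by hand.
modprobe null_blk gb=1
for io in libaio io_uring; do
  cat > /tmp/xnvme_null0.json <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "io_mechanism": "$io", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
  # Same flags as the runs above: queue depth 64, randread, 5 s, 4 KiB I/O against bdev null0.
  "$SPDK_DIR"/build/examples/bdevperf --json /tmp/xnvme_null0.json -q 64 -w randread -t 5 -T null0 -o 4096
done
modprobe -r null_blk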
00:13:58.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:13:58.791 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:59.049 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85297 00:13:59.049 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:59.049 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:59.049 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85297 00:13:59.049 01:23:44 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 85297 ']' 00:13:59.049 01:23:44 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.049 01:23:44 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.049 01:23:44 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.049 01:23:44 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.049 01:23:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.049 [2024-07-21 01:23:44.198054] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:59.049 [2024-07-21 01:23:44.198377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85297 ] 00:13:59.308 [2024-07-21 01:23:44.365917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.308 [2024-07-21 01:23:44.430119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.877 01:23:44 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.877 01:23:44 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:13:59.877 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:59.877 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:13:59.877 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:59.877 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:59.877 01:23:44 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:00.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:00.395 Waiting for block devices as requested 00:14:00.395 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:00.395 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:05.664 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:05.664 
01:23:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:05.664 01:23:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:14:05.664 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:05.665 nvme0n1 00:14:05.665 nvme0n2 00:14:05.665 nvme0n3 00:14:05.665 nvme1n1 00:14:05.665 nvme2n1 00:14:05.665 nvme3n1 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.665 01:23:50 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.665 01:23:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 
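The loop above enumerates /dev/nvme*n*, skips anything the earlier zoned check flagged, and queues one bdev_xnvme_create call per namespace before replaying all six through rpc_cmd. A rough hand-driven equivalent, assuming an spdk_tgt is already listening on the default /var/tmp/spdk.sock and $SPDK_DIR points at the repo:

# Hedged sketch of the xNVMe bdev registration step, not the test harness itself.
for nvme in /dev/nvme*n*; do
  dev=${nvme##*/}
  # Mirror the zoned-namespace check from the log: skip anything not reporting "none".
  if [[ -e /sys/block/$dev/queue/zoned && $(cat /sys/block/"$dev"/queue/zoned) != none ]]; then
    continue
  fi
  [[ -b $nvme ]] || continue
  # Same positional arguments the harness echoes: filename, bdev name, io_mechanism.
  "$SPDK_DIR"/scripts/rpc.py bdev_xnvme_create "$nvme" "$dev" io_uring
done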
00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6d423c90-33c9-4b2e-b77a-308e1ecf2092"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d423c90-33c9-4b2e-b77a-308e1ecf2092",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "0d180f06-e83b-4afd-a7d0-a01561f62ab9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0d180f06-e83b-4afd-a7d0-a01561f62ab9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "65e6253d-b6c0-4202-8679-6a7cde548c0a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65e6253d-b6c0-4202-8679-6a7cde548c0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cfcc5fe8-ec81-4f25-b84d-6a250d0f2f36"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cfcc5fe8-ec81-4f25-b84d-6a250d0f2f36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ba727b0e-2269-4c44-8c97-1b2f49521902"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ba727b0e-2269-4c44-8c97-1b2f49521902",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "df3f8922-acce-4700-a9dc-3dd4e96c239b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "df3f8922-acce-4700-a9dc-3dd4e96c239b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:14:05.925 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 85297 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 85297 ']' 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 85297 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85297 00:14:05.925 killing process with pid 85297 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85297' 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 85297 00:14:05.925 01:23:51 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 85297 00:14:06.494 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:06.494 01:23:51 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:06.494 01:23:51 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:06.494 01:23:51 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:06.494 01:23:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.494 ************************************ 00:14:06.494 START TEST bdev_hello_world 00:14:06.494 ************************************ 00:14:06.494 01:23:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:06.753 [2024-07-21 01:23:51.852060] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:14:06.753 [2024-07-21 01:23:51.852171] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85568 ] 00:14:06.753 [2024-07-21 01:23:52.022114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.012 [2024-07-21 01:23:52.085101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.012 [2024-07-21 01:23:52.307240] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:07.012 [2024-07-21 01:23:52.307288] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:07.012 [2024-07-21 01:23:52.307307] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:07.012 [2024-07-21 01:23:52.309619] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:07.012 [2024-07-21 01:23:52.310011] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:07.012 [2024-07-21 01:23:52.310034] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:07.012 [2024-07-21 01:23:52.310545] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:14:07.012 00:14:07.012 [2024-07-21 01:23:52.310578] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:07.582 00:14:07.582 real 0m0.870s 00:14:07.582 user 0m0.463s 00:14:07.582 sys 0m0.297s 00:14:07.582 01:23:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.582 01:23:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:07.582 ************************************ 00:14:07.582 END TEST bdev_hello_world 00:14:07.582 ************************************ 00:14:07.582 01:23:52 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:14:07.582 01:23:52 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:07.582 01:23:52 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.582 01:23:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.582 ************************************ 00:14:07.582 START TEST bdev_bounds 00:14:07.582 ************************************ 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:14:07.582 Process bdevio pid: 85599 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=85599 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 85599' 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 85599 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 85599 ']' 00:14:07.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
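Before hello_bdev opened nvme0n1 for its write/read round trip, the harness dumped all six registered bdevs with rpc_cmd bdev_get_bdevs filtered through jq (the large JSON blob earlier in this run). A small sketch for inspecting the same state interactively, assuming an SPDK app is still up on the default /var/tmp/spdk.sock with the same bdev.json loaded:

# Hedged sketch: list unclaimed bdevs the way the harness does, keeping just
# the name, block count, and block size columns.
"$SPDK_DIR"/scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.claimed == false) | "\(.name)\t\(.num_blocks)\t\(.block_size)"'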
00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:07.582 01:23:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:07.582 [2024-07-21 01:23:52.803635] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:07.582 [2024-07-21 01:23:52.803750] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85599 ] 00:14:07.852 [2024-07-21 01:23:52.977138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.852 [2024-07-21 01:23:53.042421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.852 [2024-07-21 01:23:53.042566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.852 [2024-07-21 01:23:53.042650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.434 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:08.434 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:14:08.434 01:23:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:08.434 I/O targets: 00:14:08.434 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:08.434 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:08.434 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:08.434 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:08.434 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:08.434 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:08.434 00:14:08.434 00:14:08.434 CUnit - A unit testing framework for C - Version 2.1-3 00:14:08.434 http://cunit.sourceforge.net/ 00:14:08.434 00:14:08.434 00:14:08.434 Suite: bdevio tests on: nvme3n1 00:14:08.434 Test: blockdev write read block ...passed 00:14:08.434 Test: blockdev write zeroes read block ...passed 00:14:08.434 Test: blockdev write zeroes read no split ...passed 00:14:08.434 Test: blockdev write zeroes read split ...passed 00:14:08.434 Test: blockdev write zeroes read split partial ...passed 00:14:08.434 Test: blockdev reset ...passed 00:14:08.434 Test: blockdev write read 8 blocks ...passed 00:14:08.434 Test: blockdev write read size > 128k ...passed 00:14:08.434 Test: blockdev write read invalid size ...passed 00:14:08.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.434 Test: blockdev write read max offset ...passed 00:14:08.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.434 Test: blockdev writev readv 8 blocks ...passed 00:14:08.434 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.434 Test: blockdev writev readv block ...passed 00:14:08.434 Test: blockdev writev readv size > 128k ...passed 00:14:08.434 Test: blockdev writev readv size > 128k in two iovs 
...passed 00:14:08.434 Test: blockdev comparev and writev ...passed 00:14:08.434 Test: blockdev nvme passthru rw ...passed 00:14:08.434 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.434 Test: blockdev nvme admin passthru ...passed 00:14:08.434 Test: blockdev copy ...passed 00:14:08.434 Suite: bdevio tests on: nvme2n1 00:14:08.434 Test: blockdev write read block ...passed 00:14:08.434 Test: blockdev write zeroes read block ...passed 00:14:08.434 Test: blockdev write zeroes read no split ...passed 00:14:08.434 Test: blockdev write zeroes read split ...passed 00:14:08.434 Test: blockdev write zeroes read split partial ...passed 00:14:08.434 Test: blockdev reset ...passed 00:14:08.434 Test: blockdev write read 8 blocks ...passed 00:14:08.434 Test: blockdev write read size > 128k ...passed 00:14:08.434 Test: blockdev write read invalid size ...passed 00:14:08.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.434 Test: blockdev write read max offset ...passed 00:14:08.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.434 Test: blockdev writev readv 8 blocks ...passed 00:14:08.434 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.434 Test: blockdev writev readv block ...passed 00:14:08.434 Test: blockdev writev readv size > 128k ...passed 00:14:08.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.434 Test: blockdev comparev and writev ...passed 00:14:08.434 Test: blockdev nvme passthru rw ...passed 00:14:08.434 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.434 Test: blockdev nvme admin passthru ...passed 00:14:08.434 Test: blockdev copy ...passed 00:14:08.434 Suite: bdevio tests on: nvme1n1 00:14:08.434 Test: blockdev write read block ...passed 00:14:08.434 Test: blockdev write zeroes read block ...passed 00:14:08.434 Test: blockdev write zeroes read no split ...passed 00:14:08.434 Test: blockdev write zeroes read split ...passed 00:14:08.434 Test: blockdev write zeroes read split partial ...passed 00:14:08.434 Test: blockdev reset ...passed 00:14:08.434 Test: blockdev write read 8 blocks ...passed 00:14:08.693 Test: blockdev write read size > 128k ...passed 00:14:08.693 Test: blockdev write read invalid size ...passed 00:14:08.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.694 Test: blockdev write read max offset ...passed 00:14:08.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.694 Test: blockdev writev readv 8 blocks ...passed 00:14:08.694 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.694 Test: blockdev writev readv block ...passed 00:14:08.694 Test: blockdev writev readv size > 128k ...passed 00:14:08.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.694 Test: blockdev comparev and writev ...passed 00:14:08.694 Test: blockdev nvme passthru rw ...passed 00:14:08.694 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.694 Test: blockdev nvme admin passthru ...passed 00:14:08.694 Test: blockdev copy ...passed 00:14:08.694 Suite: bdevio tests on: nvme0n3 00:14:08.694 Test: blockdev write read block ...passed 00:14:08.694 Test: blockdev write zeroes read block ...passed 00:14:08.694 Test: blockdev write zeroes read no split ...passed 00:14:08.694 Test: blockdev write 
zeroes read split ...passed 00:14:08.694 Test: blockdev write zeroes read split partial ...passed 00:14:08.694 Test: blockdev reset ...passed 00:14:08.694 Test: blockdev write read 8 blocks ...passed 00:14:08.694 Test: blockdev write read size > 128k ...passed 00:14:08.694 Test: blockdev write read invalid size ...passed 00:14:08.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.694 Test: blockdev write read max offset ...passed 00:14:08.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.694 Test: blockdev writev readv 8 blocks ...passed 00:14:08.694 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.694 Test: blockdev writev readv block ...passed 00:14:08.694 Test: blockdev writev readv size > 128k ...passed 00:14:08.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.694 Test: blockdev comparev and writev ...passed 00:14:08.694 Test: blockdev nvme passthru rw ...passed 00:14:08.694 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.694 Test: blockdev nvme admin passthru ...passed 00:14:08.694 Test: blockdev copy ...passed 00:14:08.694 Suite: bdevio tests on: nvme0n2 00:14:08.694 Test: blockdev write read block ...passed 00:14:08.694 Test: blockdev write zeroes read block ...passed 00:14:08.694 Test: blockdev write zeroes read no split ...passed 00:14:08.694 Test: blockdev write zeroes read split ...passed 00:14:08.694 Test: blockdev write zeroes read split partial ...passed 00:14:08.694 Test: blockdev reset ...passed 00:14:08.694 Test: blockdev write read 8 blocks ...passed 00:14:08.694 Test: blockdev write read size > 128k ...passed 00:14:08.694 Test: blockdev write read invalid size ...passed 00:14:08.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.694 Test: blockdev write read max offset ...passed 00:14:08.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.694 Test: blockdev writev readv 8 blocks ...passed 00:14:08.694 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.694 Test: blockdev writev readv block ...passed 00:14:08.694 Test: blockdev writev readv size > 128k ...passed 00:14:08.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.694 Test: blockdev comparev and writev ...passed 00:14:08.694 Test: blockdev nvme passthru rw ...passed 00:14:08.694 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.694 Test: blockdev nvme admin passthru ...passed 00:14:08.694 Test: blockdev copy ...passed 00:14:08.694 Suite: bdevio tests on: nvme0n1 00:14:08.694 Test: blockdev write read block ...passed 00:14:08.694 Test: blockdev write zeroes read block ...passed 00:14:08.694 Test: blockdev write zeroes read no split ...passed 00:14:08.694 Test: blockdev write zeroes read split ...passed 00:14:08.694 Test: blockdev write zeroes read split partial ...passed 00:14:08.694 Test: blockdev reset ...passed 00:14:08.694 Test: blockdev write read 8 blocks ...passed 00:14:08.694 Test: blockdev write read size > 128k ...passed 00:14:08.694 Test: blockdev write read invalid size ...passed 00:14:08.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:08.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:08.694 Test: blockdev write read max offset ...passed 
00:14:08.694 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:08.694 Test: blockdev writev readv 8 blocks ...passed 00:14:08.694 Test: blockdev writev readv 30 x 1block ...passed 00:14:08.694 Test: blockdev writev readv block ...passed 00:14:08.694 Test: blockdev writev readv size > 128k ...passed 00:14:08.694 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:08.694 Test: blockdev comparev and writev ...passed 00:14:08.694 Test: blockdev nvme passthru rw ...passed 00:14:08.694 Test: blockdev nvme passthru vendor specific ...passed 00:14:08.694 Test: blockdev nvme admin passthru ...passed 00:14:08.694 Test: blockdev copy ...passed 00:14:08.694 00:14:08.694 Run Summary: Type Total Ran Passed Failed Inactive 00:14:08.694 suites 6 6 n/a 0 0 00:14:08.694 tests 138 138 138 0 0 00:14:08.694 asserts 780 780 780 0 n/a 00:14:08.694 00:14:08.694 Elapsed time = 0.429 seconds 00:14:08.694 0 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 85599 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 85599 ']' 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 85599 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85599 00:14:08.694 killing process with pid 85599 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85599' 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 85599 00:14:08.694 01:23:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 85599 00:14:08.953 01:23:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:14:08.953 00:14:08.953 real 0m1.524s 00:14:08.953 user 0m3.377s 00:14:08.953 sys 0m0.432s 00:14:08.953 01:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.953 01:23:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:08.953 ************************************ 00:14:08.953 END TEST bdev_bounds 00:14:08.953 ************************************ 00:14:09.211 01:23:54 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:09.211 01:23:54 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:09.211 01:23:54 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.211 01:23:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.211 ************************************ 00:14:09.211 START TEST bdev_nbd 00:14:09.211 ************************************ 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- 
# uname -s 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=85643 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 85643 /var/tmp/spdk-nbd.sock 00:14:09.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 85643 ']' 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.211 01:23:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:09.211 [2024-07-21 01:23:54.412138] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
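The bdev_nbd test starting here launches a bdev_svc app on /var/tmp/spdk-nbd.sock, exports each of the six xNVMe bdevs through the kernel NBD driver, and, as the following lines show, verifies every /dev/nbdN with a single 4 KiB direct-I/O dd. A hedged sketch of one such export cycle, assuming the nbd kernel module is loaded and the bdev_svc shown above is listening on that socket:

# Hedged sketch: map one bdev to an NBD node, read one block back, then unmap it.
sock=/var/tmp/spdk-nbd.sock
nbd=$("$SPDK_DIR"/scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1)   # prints e.g. /dev/nbd0
dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
"$SPDK_DIR"/scripts/rpc.py -s "$sock" nbd_stop_disk "$nbd"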
00:14:09.211 [2024-07-21 01:23:54.412258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.470 [2024-07-21 01:23:54.586418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.470 [2024-07-21 01:23:54.649121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.037 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.296 1+0 records in 
00:14:10.296 1+0 records out 00:14:10.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000876629 s, 4.7 MB/s 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.296 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.555 1+0 records in 00:14:10.555 1+0 records out 00:14:10.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059835 s, 6.8 MB/s 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:10.555 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.814 1+0 records in 00:14:10.814 1+0 records out 00:14:10.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809244 s, 5.1 MB/s 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.814 01:23:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.814 1+0 records in 00:14:10.814 1+0 records out 00:14:10.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723936 s, 5.7 MB/s 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:10.814 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.073 1+0 records in 00:14:11.073 1+0 records out 00:14:11.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736374 s, 5.6 MB/s 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:11.073 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.331 1+0 records in 00:14:11.331 1+0 records out 00:14:11.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122032 s, 3.4 MB/s 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:11.331 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd0", 00:14:11.590 "bdev_name": "nvme0n1" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd1", 00:14:11.590 "bdev_name": "nvme0n2" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd2", 00:14:11.590 "bdev_name": "nvme0n3" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd3", 00:14:11.590 "bdev_name": "nvme1n1" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd4", 00:14:11.590 "bdev_name": "nvme2n1" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd5", 00:14:11.590 "bdev_name": "nvme3n1" 00:14:11.590 } 00:14:11.590 ]' 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd0", 00:14:11.590 "bdev_name": "nvme0n1" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd1", 00:14:11.590 "bdev_name": "nvme0n2" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd2", 00:14:11.590 "bdev_name": "nvme0n3" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd3", 00:14:11.590 "bdev_name": "nvme1n1" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd4", 00:14:11.590 "bdev_name": "nvme2n1" 00:14:11.590 }, 00:14:11.590 { 00:14:11.590 "nbd_device": "/dev/nbd5", 00:14:11.590 "bdev_name": "nvme3n1" 00:14:11.590 } 00:14:11.590 ]' 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.590 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.848 01:23:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.107 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.365 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.366 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.632 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:12.891 01:23:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:12.891 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:12.891 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:12.891 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:12.891 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:12.891 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:12.891 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:13.149 /dev/nbd0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:13.149 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.150 1+0 records in 00:14:13.150 1+0 records out 00:14:13.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055682 s, 7.4 MB/s 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.150 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:14:13.408 /dev/nbd1 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.408 1+0 records in 00:14:13.408 1+0 records out 00:14:13.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631444 s, 6.5 MB/s 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:13.408 01:23:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.408 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:14:13.666 /dev/nbd10 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.666 1+0 records in 00:14:13.666 1+0 records out 00:14:13.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639912 s, 6.4 MB/s 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.666 01:23:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:14:13.924 /dev/nbd11 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:13.924 01:23:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.924 1+0 records in 00:14:13.924 1+0 records out 00:14:13.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079827 s, 5.1 MB/s 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:13.924 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:14:14.183 /dev/nbd12 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.183 1+0 records in 00:14:14.183 1+0 records out 00:14:14.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117835 s, 3.5 MB/s 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:14.183 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:14.442 /dev/nbd13 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.442 1+0 records in 00:14:14.442 1+0 records out 00:14:14.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529294 s, 7.7 MB/s 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.442 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:14.702 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd0", 00:14:14.703 "bdev_name": "nvme0n1" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd1", 00:14:14.703 "bdev_name": "nvme0n2" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd10", 00:14:14.703 "bdev_name": "nvme0n3" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd11", 00:14:14.703 "bdev_name": "nvme1n1" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd12", 00:14:14.703 "bdev_name": "nvme2n1" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd13", 00:14:14.703 "bdev_name": "nvme3n1" 00:14:14.703 } 00:14:14.703 ]' 00:14:14.703 01:23:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd0", 00:14:14.703 "bdev_name": "nvme0n1" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd1", 00:14:14.703 "bdev_name": "nvme0n2" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd10", 00:14:14.703 "bdev_name": "nvme0n3" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd11", 00:14:14.703 "bdev_name": "nvme1n1" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd12", 00:14:14.703 "bdev_name": "nvme2n1" 00:14:14.703 }, 00:14:14.703 { 00:14:14.703 "nbd_device": "/dev/nbd13", 00:14:14.703 "bdev_name": "nvme3n1" 00:14:14.703 } 00:14:14.703 ]' 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:14.703 /dev/nbd1 00:14:14.703 /dev/nbd10 00:14:14.703 /dev/nbd11 00:14:14.703 /dev/nbd12 00:14:14.703 /dev/nbd13' 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:14.703 /dev/nbd1 00:14:14.703 /dev/nbd10 00:14:14.703 /dev/nbd11 00:14:14.703 /dev/nbd12 00:14:14.703 /dev/nbd13' 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:14.703 256+0 records in 00:14:14.703 256+0 records out 00:14:14.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116195 s, 90.2 MB/s 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:14.703 256+0 records in 00:14:14.703 256+0 records out 00:14:14.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116581 s, 9.0 MB/s 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.703 01:23:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:14.962 256+0 records in 00:14:14.962 256+0 records out 00:14:14.962 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.131012 s, 8.0 MB/s 00:14:14.962 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.962 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:14.962 256+0 records in 00:14:14.962 256+0 records out 00:14:14.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118502 s, 8.8 MB/s 00:14:14.962 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:14.962 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:15.220 256+0 records in 00:14:15.220 256+0 records out 00:14:15.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120276 s, 8.7 MB/s 00:14:15.220 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:15.220 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:15.220 256+0 records in 00:14:15.220 256+0 records out 00:14:15.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148954 s, 7.0 MB/s 00:14:15.220 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:15.220 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:15.479 256+0 records in 00:14:15.479 256+0 records out 00:14:15.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122317 s, 8.6 MB/s 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.479 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:15.480 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.480 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.739 01:24:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.997 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.255 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.513 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.772 01:24:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:16.772 01:24:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:16.772 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:16.772 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:16.772 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:17.031 malloc_lvol_verify 00:14:17.031 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:17.290 7a2c2a60-1ad0-42eb-b50e-7de63fae4de6 00:14:17.290 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:17.549 364d0508-6e42-4537-9c33-0f394570fc63 00:14:17.549 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:17.807 /dev/nbd0 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:17.807 mke2fs 1.46.5 (30-Dec-2021) 00:14:17.807 Discarding device blocks: 0/4096 done 
00:14:17.807 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:17.807 00:14:17.807 Allocating group tables: 0/1 done 00:14:17.807 Writing inode tables: 0/1 done 00:14:17.807 Creating journal (1024 blocks): done 00:14:17.807 Writing superblocks and filesystem accounting information: 0/1 done 00:14:17.807 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.807 01:24:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 85643 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 85643 ']' 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 85643 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:17.807 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85643 00:14:18.064 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:18.064 killing process with pid 85643 00:14:18.064 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:18.064 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85643' 00:14:18.065 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 85643 00:14:18.065 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 85643 00:14:18.323 01:24:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:18.323 00:14:18.323 real 0m9.182s 00:14:18.323 user 0m11.747s 00:14:18.323 sys 0m4.352s 00:14:18.323 ************************************ 00:14:18.323 END 
TEST bdev_nbd 00:14:18.323 ************************************ 00:14:18.323 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:18.323 01:24:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:18.323 01:24:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:18.323 01:24:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:14:18.323 01:24:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:14:18.323 01:24:03 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:18.323 01:24:03 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:18.323 01:24:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.323 01:24:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.323 ************************************ 00:14:18.323 START TEST bdev_fio 00:14:18.323 ************************************ 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:14:18.323 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:14:18.323 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:14:18.581 01:24:03 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:18.581 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:14:18.581 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:18.581 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:14:18.581 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:14:18.581 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:18.581 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n2]' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n2 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n3]' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n3 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:18.582 ************************************ 00:14:18.582 START TEST bdev_fio_rw_verify 00:14:18.582 ************************************ 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.582 01:24:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:18.841 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.841 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.841 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.841 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.841 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.841 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:18.841 fio-3.35 00:14:18.841 Starting 6 threads 00:14:31.118 00:14:31.118 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=86032: Sun Jul 21 01:24:14 2024 00:14:31.118 read: IOPS=32.7k, 
BW=128MiB/s (134MB/s)(1276MiB/10001msec) 00:14:31.118 slat (usec): min=2, max=551, avg= 7.72, stdev= 6.09 00:14:31.118 clat (usec): min=98, max=10877, avg=551.99, stdev=266.49 00:14:31.118 lat (usec): min=104, max=10891, avg=559.71, stdev=267.45 00:14:31.118 clat percentiles (usec): 00:14:31.118 | 50.000th=[ 537], 99.000th=[ 1270], 99.900th=[ 2376], 99.990th=[ 4293], 00:14:31.118 | 99.999th=[10814] 00:14:31.118 write: IOPS=33.0k, BW=129MiB/s (135MB/s)(1288MiB/10001msec); 0 zone resets 00:14:31.118 slat (usec): min=8, max=3718, avg=26.01, stdev=38.24 00:14:31.118 clat (usec): min=82, max=4931, avg=660.07, stdev=278.13 00:14:31.118 lat (usec): min=105, max=4966, avg=686.08, stdev=282.95 00:14:31.118 clat percentiles (usec): 00:14:31.118 | 50.000th=[ 635], 99.000th=[ 1500], 99.900th=[ 2147], 99.990th=[ 3720], 00:14:31.118 | 99.999th=[ 4817] 00:14:31.118 bw ( KiB/s): min=103752, max=155408, per=100.00%, avg=132099.37, stdev=2360.07, samples=114 00:14:31.118 iops : min=25938, max=38852, avg=33024.53, stdev=590.00, samples=114 00:14:31.118 lat (usec) : 100=0.01%, 250=7.51%, 500=29.42%, 750=36.84%, 1000=19.27% 00:14:31.118 lat (msec) : 2=6.81%, 4=0.14%, 10=0.01%, 20=0.01% 00:14:31.118 cpu : usr=54.45%, sys=29.89%, ctx=8265, majf=0, minf=27239 00:14:31.118 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.118 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.118 issued rwts: total=326689,329617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.118 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:31.118 00:14:31.118 Run status group 0 (all jobs): 00:14:31.118 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=1276MiB (1338MB), run=10001-10001msec 00:14:31.118 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=1288MiB (1350MB), run=10001-10001msec 00:14:31.118 ----------------------------------------------------- 00:14:31.118 Suppressions used: 00:14:31.118 count bytes template 00:14:31.118 6 48 /usr/src/fio/parse.c 00:14:31.118 2702 259392 /usr/src/fio/iolog.c 00:14:31.118 1 8 libtcmalloc_minimal.so 00:14:31.118 1 904 libcrypto.so 00:14:31.118 ----------------------------------------------------- 00:14:31.118 00:14:31.118 00:14:31.118 real 0m11.355s 00:14:31.118 user 0m33.501s 00:14:31.118 sys 0m18.403s 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:31.118 ************************************ 00:14:31.118 END TEST bdev_fio_rw_verify 00:14:31.118 ************************************ 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:14:31.118 01:24:15 
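For reference, the job file that the rw_verify pass above hands to fio is assembled in two steps: fio_config_gen writes the verify-workload preamble (its options are not echoed in the trace; only the serialize_overlap=1 line is visible), and the loop over bdevs_name appends one job section per xNVMe bdev. The reconstructable part of bdev.fio is therefore roughly:

    serialize_overlap=1
    [job_nvme0n1]
    filename=nvme0n1
    [job_nvme0n2]
    filename=nvme0n2
    [job_nvme0n3]
    filename=nvme0n3
    [job_nvme1n1]
    filename=nvme1n1
    [job_nvme2n1]
    filename=nvme2n1
    [job_nvme3n1]
    filename=nvme3n1

fio consumes it through the SPDK bdev ioengine (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 --spdk_json_conf=bdev.json) with LD_PRELOAD pointing at libasan and the build/fio/spdk_bdev plugin, exactly as traced above. Note that the trim-workload config being generated here never runs: the jq filter below keeps only bdevs with supported_io_types.unmap == true, every xNVMe bdev reports "unmap": false, so the [[ -n '' ]] guard fails and only the cleanup (rm -f bdev.fio; popd) executes.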
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "6d423c90-33c9-4b2e-b77a-308e1ecf2092"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d423c90-33c9-4b2e-b77a-308e1ecf2092",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "0d180f06-e83b-4afd-a7d0-a01561f62ab9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0d180f06-e83b-4afd-a7d0-a01561f62ab9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "65e6253d-b6c0-4202-8679-6a7cde548c0a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65e6253d-b6c0-4202-8679-6a7cde548c0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cfcc5fe8-ec81-4f25-b84d-6a250d0f2f36"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 
262144,' ' "uuid": "cfcc5fe8-ec81-4f25-b84d-6a250d0f2f36",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ba727b0e-2269-4c44-8c97-1b2f49521902"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ba727b0e-2269-4c44-8c97-1b2f49521902",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "df3f8922-acce-4700-a9dc-3dd4e96c239b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "df3f8922-acce-4700-a9dc-3dd4e96c239b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.118 /home/vagrant/spdk_repo/spdk 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:14:31.118 00:14:31.118 real 0m11.586s 00:14:31.118 user 0m33.605s 00:14:31.118 sys 0m18.531s 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:31.118 01:24:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:31.118 ************************************ 00:14:31.118 END TEST bdev_fio 00:14:31.118 ************************************ 00:14:31.118 01:24:15 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:31.118 01:24:15 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:31.119 01:24:15 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:14:31.119 01:24:15 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:31.119 01:24:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.119 ************************************ 00:14:31.119 START TEST bdev_verify 
00:14:31.119 ************************************ 00:14:31.119 01:24:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:31.119 [2024-07-21 01:24:15.320324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:31.119 [2024-07-21 01:24:15.320470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:14:31.119 [2024-07-21 01:24:15.493342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:31.119 [2024-07-21 01:24:15.559003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.119 [2024-07-21 01:24:15.559113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.119 Running I/O for 5 seconds... 00:14:36.386 00:14:36.386 Latency(us) 00:14:36.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.386 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x0 length 0x80000 00:14:36.386 nvme0n1 : 5.04 1903.75 7.44 0.00 0.00 67134.86 9790.92 84644.09 00:14:36.386 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x80000 length 0x80000 00:14:36.386 nvme0n1 : 5.03 1934.04 7.55 0.00 0.00 66074.93 6632.56 77485.13 00:14:36.386 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x0 length 0x80000 00:14:36.386 nvme0n2 : 5.04 1928.56 7.53 0.00 0.00 66159.89 8053.82 77485.13 00:14:36.386 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x80000 length 0x80000 00:14:36.386 nvme0n2 : 5.03 1933.55 7.55 0.00 0.00 65989.09 7737.99 84644.09 00:14:36.386 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x0 length 0x80000 00:14:36.386 nvme0n3 : 5.05 1953.49 7.63 0.00 0.00 65215.18 8053.82 83801.86 00:14:36.386 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x80000 length 0x80000 00:14:36.386 nvme0n3 : 5.03 1933.10 7.55 0.00 0.00 65900.30 9527.72 88855.24 00:14:36.386 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x0 length 0x20000 00:14:36.386 nvme1n1 : 5.05 1952.97 7.63 0.00 0.00 65127.69 7001.03 88855.24 00:14:36.386 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x20000 length 0x20000 00:14:36.386 nvme1n1 : 5.06 1947.52 7.61 0.00 0.00 65305.95 6290.40 93487.50 00:14:36.386 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0x0 length 0xbd0bd 00:14:36.386 nvme2n1 : 5.06 2483.87 9.70 0.00 0.00 51085.93 3632.12 136441.21 00:14:36.386 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:36.386 nvme2n1 : 5.05 2070.94 8.09 0.00 0.00 61252.37 1908.18 147390.20 00:14:36.386 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:14:36.386 Verification LBA range: start 0x0 length 0xa0000 00:14:36.386 nvme3n1 : 5.06 1920.66 7.50 0.00 0.00 65793.82 6790.48 88013.01 00:14:36.386 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.386 Verification LBA range: start 0xa0000 length 0xa0000 00:14:36.386 nvme3n1 : 5.06 1923.92 7.52 0.00 0.00 65935.11 7685.35 70747.30 00:14:36.387 =================================================================================================================== 00:14:36.387 Total : 23886.37 93.31 0.00 0.00 63919.12 1908.18 147390.20 00:14:36.387 00:14:36.387 real 0m6.002s 00:14:36.387 user 0m8.325s 00:14:36.387 sys 0m2.509s 00:14:36.387 01:24:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:36.387 ************************************ 00:14:36.387 END TEST bdev_verify 00:14:36.387 01:24:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:36.387 ************************************ 00:14:36.387 01:24:21 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:36.387 01:24:21 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:14:36.387 01:24:21 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:36.387 01:24:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:36.387 ************************************ 00:14:36.387 START TEST bdev_verify_big_io 00:14:36.387 ************************************ 00:14:36.387 01:24:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:36.387 [2024-07-21 01:24:21.396118] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:36.387 [2024-07-21 01:24:21.396240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86293 ] 00:14:36.387 [2024-07-21 01:24:21.567635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.387 [2024-07-21 01:24:21.632234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.387 [2024-07-21 01:24:21.632352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.645 Running I/O for 5 seconds... 
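A quick sanity check on the bdev_verify summary above: with 4096-byte I/Os, MiB/s is simply IOPS x 4096 / 2^20, so for the first nvme0n1 row:

    awk 'BEGIN { printf "%.2f MiB/s\n", 1903.75 * 4096 / 1048576 }'   # prints 7.44 MiB/s, matching the table

The totals line obeys the same relation (23886.37 IOPS -> 93.31 MiB/s); in the 64 KiB big-I/O table that follows, the factor is 65536/2^20 = 1/16, e.g. 128.66 IOPS -> 8.04 MiB/s.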
00:14:43.209 00:14:43.209 Latency(us) 00:14:43.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.209 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x0 length 0x8000 00:14:43.209 nvme0n1 : 5.65 128.66 8.04 0.00 0.00 967977.40 128018.92 2129156.73 00:14:43.209 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x8000 length 0x8000 00:14:43.209 nvme0n1 : 5.70 168.85 10.55 0.00 0.00 725639.84 5737.69 1098267.55 00:14:43.209 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x0 length 0x8000 00:14:43.209 nvme0n2 : 5.80 133.77 8.36 0.00 0.00 909547.38 5685.05 2048302.68 00:14:43.209 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x8000 length 0x8000 00:14:43.209 nvme0n2 : 5.78 177.02 11.06 0.00 0.00 684149.20 130545.61 758006.75 00:14:43.209 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x0 length 0x8000 00:14:43.209 nvme0n3 : 5.79 220.91 13.81 0.00 0.00 531842.90 64851.69 565978.37 00:14:43.209 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x8000 length 0x8000 00:14:43.209 nvme0n3 : 5.80 154.47 9.65 0.00 0.00 772505.60 83380.74 596298.64 00:14:43.209 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x0 length 0x2000 00:14:43.209 nvme1n1 : 5.78 152.15 9.51 0.00 0.00 749464.81 66115.03 1724886.46 00:14:43.209 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x2000 length 0x2000 00:14:43.209 nvme1n1 : 5.79 174.08 10.88 0.00 0.00 659712.36 63167.23 1233024.31 00:14:43.209 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x0 length 0xbd0b 00:14:43.209 nvme2n1 : 5.80 182.21 11.39 0.00 0.00 620666.38 43164.27 1549702.68 00:14:43.209 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:43.209 nvme2n1 : 5.80 171.12 10.70 0.00 0.00 660443.59 61482.77 1691197.28 00:14:43.209 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0x0 length 0xa000 00:14:43.209 nvme3n1 : 5.80 231.74 14.48 0.00 0.00 477825.62 3053.08 667045.94 00:14:43.209 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:43.209 Verification LBA range: start 0xa000 length 0xa000 00:14:43.209 nvme3n1 : 5.81 174.11 10.88 0.00 0.00 637156.83 7737.99 1516013.49 00:14:43.209 =================================================================================================================== 00:14:43.209 Total : 2069.10 129.32 0.00 0.00 677700.17 3053.08 2129156.73 00:14:43.209 00:14:43.209 real 0m6.802s 00:14:43.209 user 0m12.164s 00:14:43.209 sys 0m0.660s 00:14:43.209 01:24:28 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.209 ************************************ 00:14:43.209 END TEST bdev_verify_big_io 00:14:43.209 ************************************ 00:14:43.209 01:24:28 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.209 
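Both verification passes drive the same bdevperf example binary against the generated bdev.json and differ only in the per-I/O size (-o 4096 for bdev_verify, -o 65536 for bdev_verify_big_io). Stripped of the run_test wrapper, the two invocations traced above reduce to:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3 ''   # bdev_verify
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''   # bdev_verify_big_io

The trailing '' mirrors the empty extra argument passed through run_test in the trace; with -m 0x3 two reactor cores are available, which is why each bdev appears twice in the tables, once per core mask (0x1 and 0x2).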
01:24:28 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:43.209 01:24:28 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:43.209 01:24:28 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.209 01:24:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.209 ************************************ 00:14:43.209 START TEST bdev_write_zeroes 00:14:43.209 ************************************ 00:14:43.209 01:24:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:43.209 [2024-07-21 01:24:28.268081] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:43.209 [2024-07-21 01:24:28.268204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86389 ] 00:14:43.209 [2024-07-21 01:24:28.435871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.209 [2024-07-21 01:24:28.498077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.466 Running I/O for 1 seconds... 00:14:44.840 00:14:44.840 Latency(us) 00:14:44.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.840 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:44.840 nvme0n1 : 1.01 6440.15 25.16 0.00 0.00 19857.98 9475.08 27372.47 00:14:44.840 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:44.840 nvme0n2 : 1.02 6428.09 25.11 0.00 0.00 19881.67 10422.59 27161.91 00:14:44.840 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:44.841 nvme0n3 : 1.02 6416.39 25.06 0.00 0.00 19904.91 11212.18 26951.35 00:14:44.841 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:44.841 nvme1n1 : 1.02 6404.92 25.02 0.00 0.00 19924.35 11370.10 26635.51 00:14:44.841 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:44.841 nvme2n1 : 1.02 11020.77 43.05 0.00 0.00 11570.36 4763.86 20213.51 00:14:44.841 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:44.841 nvme3n1 : 1.03 6359.64 24.84 0.00 0.00 19956.05 8685.49 27793.58 00:14:44.841 =================================================================================================================== 00:14:44.841 Total : 43069.95 168.24 0.00 0.00 17762.15 4763.86 27793.58 00:14:44.841 00:14:44.841 real 0m1.932s 00:14:44.841 user 0m1.158s 00:14:44.841 sys 0m0.604s 00:14:44.841 01:24:30 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.841 ************************************ 00:14:44.841 END TEST bdev_write_zeroes 00:14:44.841 ************************************ 00:14:44.841 01:24:30 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:45.100 01:24:30 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 
-o 4096 -w write_zeroes -t 1 '' 00:14:45.100 01:24:30 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:45.100 01:24:30 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.100 01:24:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.100 ************************************ 00:14:45.100 START TEST bdev_json_nonenclosed 00:14:45.100 ************************************ 00:14:45.100 01:24:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:45.100 [2024-07-21 01:24:30.277909] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:45.100 [2024-07-21 01:24:30.278037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86432 ] 00:14:45.359 [2024-07-21 01:24:30.445678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.359 [2024-07-21 01:24:30.508297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.359 [2024-07-21 01:24:30.508404] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:45.359 [2024-07-21 01:24:30.508438] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:45.359 [2024-07-21 01:24:30.508458] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:45.359 00:14:45.359 real 0m0.460s 00:14:45.359 user 0m0.197s 00:14:45.359 sys 0m0.159s 00:14:45.359 01:24:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.359 ************************************ 00:14:45.359 END TEST bdev_json_nonenclosed 00:14:45.359 ************************************ 00:14:45.359 01:24:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:45.618 01:24:30 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:45.618 01:24:30 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:45.618 01:24:30 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.618 01:24:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.618 ************************************ 00:14:45.618 START TEST bdev_json_nonarray 00:14:45.618 ************************************ 00:14:45.618 01:24:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:45.618 [2024-07-21 01:24:30.816040] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
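The two JSON negative tests here feed bdevperf deliberately malformed configs: nonenclosed.json provokes the "not enclosed in {}" error above, and nonarray.json, run next, provokes "'subsystems' should be an array"; in both cases the app rejects the config, spdk_app_stop reports a non-zero exit, and the test still ends in END TEST because that clean rejection is the behaviour under test. The actual file contents are not printed in the log, but the shapes being rejected versus accepted are, schematically (hypothetical minimal examples):

    "subsystems": []        rejected: top level is not enclosed in {}
    { "subsystems": {} }    rejected: "subsystems" is present but not an array
    { "subsystems": [] }    accepted shape: an object whose "subsystems" member is an array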
00:14:45.618 [2024-07-21 01:24:30.816160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86458 ] 00:14:45.876 [2024-07-21 01:24:30.985262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.876 [2024-07-21 01:24:31.047030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.876 [2024-07-21 01:24:31.047149] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:45.876 [2024-07-21 01:24:31.047178] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:45.876 [2024-07-21 01:24:31.047198] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:46.134 00:14:46.134 real 0m0.466s 00:14:46.134 user 0m0.200s 00:14:46.134 sys 0m0.161s 00:14:46.134 01:24:31 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.134 01:24:31 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:46.134 ************************************ 00:14:46.134 END TEST bdev_json_nonarray 00:14:46.134 ************************************ 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:46.134 01:24:31 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:46.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:47.639 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.825 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.825 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.825 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.825 ************************************ 00:14:51.825 END TEST blockdev_xnvme 00:14:51.825 ************************************ 00:14:51.825 00:14:51.825 real 0m53.034s 00:14:51.825 user 1m19.863s 00:14:51.825 sys 0m35.057s 00:14:51.825 01:24:36 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:51.825 01:24:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.825 01:24:37 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:51.825 01:24:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:51.825 01:24:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.825 01:24:37 -- common/autotest_common.sh@10 -- 
# set +x 00:14:51.825 ************************************ 00:14:51.825 START TEST ublk 00:14:51.825 ************************************ 00:14:51.825 01:24:37 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:52.084 * Looking for test storage... 00:14:52.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:52.084 01:24:37 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:52.084 01:24:37 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:52.084 01:24:37 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:52.084 01:24:37 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:52.084 01:24:37 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:52.084 01:24:37 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:52.084 01:24:37 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:52.084 01:24:37 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:52.084 01:24:37 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:52.084 01:24:37 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:52.084 01:24:37 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:52.084 01:24:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:52.084 ************************************ 00:14:52.084 START TEST test_save_ublk_config 00:14:52.084 ************************************ 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86743 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86743 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86743 ']' 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
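test_save_ublk_config starts by loading the ublk kernel module and bringing up an spdk_tgt with ublk tracing (traced just below) before any devices exist. A manual equivalent of the modprobe + spdk_tgt + waitforlisten preamble is roughly the following sketch; the scripts/rpc.py path and the use of spdk_get_version as the readiness probe are assumptions, the rest mirrors the trace:

    sudo modprobe ublk_drv
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &
    tgtpid=$!
    # waitforlisten: poll the default RPC socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

The ublk target, the malloc0 bdev and the ublk disk are then created over that RPC socket, and it is exactly that state which save_config serialises into the JSON dumped below.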
00:14:52.084 01:24:37 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:52.084 01:24:37 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:52.084 [2024-07-21 01:24:37.340330] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:52.084 [2024-07-21 01:24:37.340443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86743 ] 00:14:52.343 [2024-07-21 01:24:37.513458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.343 [2024-07-21 01:24:37.593955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.909 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:52.909 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:52.909 01:24:38 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:52.909 01:24:38 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:52.909 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.909 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:52.909 [2024-07-21 01:24:38.106881] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:52.909 [2024-07-21 01:24:38.107238] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:52.909 malloc0 00:14:52.909 [2024-07-21 01:24:38.146985] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:52.909 [2024-07-21 01:24:38.147082] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:52.909 [2024-07-21 01:24:38.147098] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:52.909 [2024-07-21 01:24:38.147113] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:52.909 [2024-07-21 01:24:38.155986] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:52.909 [2024-07-21 01:24:38.156018] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:52.910 [2024-07-21 01:24:38.162872] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:52.910 [2024-07-21 01:24:38.162974] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:52.910 [2024-07-21 01:24:38.179866] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:52.910 0 00:14:52.910 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.910 01:24:38 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:52.910 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.910 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:53.168 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.168 01:24:38 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:53.168 "subsystems": [ 00:14:53.168 { 00:14:53.168 "subsystem": "keyring", 
00:14:53.168 "config": [] 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "subsystem": "iobuf", 00:14:53.168 "config": [ 00:14:53.168 { 00:14:53.168 "method": "iobuf_set_options", 00:14:53.168 "params": { 00:14:53.168 "small_pool_count": 8192, 00:14:53.168 "large_pool_count": 1024, 00:14:53.168 "small_bufsize": 8192, 00:14:53.168 "large_bufsize": 135168 00:14:53.168 } 00:14:53.168 } 00:14:53.168 ] 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "subsystem": "sock", 00:14:53.168 "config": [ 00:14:53.168 { 00:14:53.168 "method": "sock_set_default_impl", 00:14:53.168 "params": { 00:14:53.168 "impl_name": "posix" 00:14:53.168 } 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "method": "sock_impl_set_options", 00:14:53.168 "params": { 00:14:53.168 "impl_name": "ssl", 00:14:53.168 "recv_buf_size": 4096, 00:14:53.168 "send_buf_size": 4096, 00:14:53.168 "enable_recv_pipe": true, 00:14:53.168 "enable_quickack": false, 00:14:53.168 "enable_placement_id": 0, 00:14:53.168 "enable_zerocopy_send_server": true, 00:14:53.168 "enable_zerocopy_send_client": false, 00:14:53.168 "zerocopy_threshold": 0, 00:14:53.168 "tls_version": 0, 00:14:53.168 "enable_ktls": false 00:14:53.168 } 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "method": "sock_impl_set_options", 00:14:53.168 "params": { 00:14:53.168 "impl_name": "posix", 00:14:53.168 "recv_buf_size": 2097152, 00:14:53.168 "send_buf_size": 2097152, 00:14:53.168 "enable_recv_pipe": true, 00:14:53.168 "enable_quickack": false, 00:14:53.168 "enable_placement_id": 0, 00:14:53.168 "enable_zerocopy_send_server": true, 00:14:53.168 "enable_zerocopy_send_client": false, 00:14:53.168 "zerocopy_threshold": 0, 00:14:53.168 "tls_version": 0, 00:14:53.168 "enable_ktls": false 00:14:53.168 } 00:14:53.168 } 00:14:53.168 ] 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "subsystem": "vmd", 00:14:53.168 "config": [] 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "subsystem": "accel", 00:14:53.168 "config": [ 00:14:53.168 { 00:14:53.168 "method": "accel_set_options", 00:14:53.168 "params": { 00:14:53.168 "small_cache_size": 128, 00:14:53.168 "large_cache_size": 16, 00:14:53.168 "task_count": 2048, 00:14:53.168 "sequence_count": 2048, 00:14:53.168 "buf_count": 2048 00:14:53.168 } 00:14:53.168 } 00:14:53.168 ] 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "subsystem": "bdev", 00:14:53.168 "config": [ 00:14:53.168 { 00:14:53.168 "method": "bdev_set_options", 00:14:53.168 "params": { 00:14:53.168 "bdev_io_pool_size": 65535, 00:14:53.168 "bdev_io_cache_size": 256, 00:14:53.168 "bdev_auto_examine": true, 00:14:53.168 "iobuf_small_cache_size": 128, 00:14:53.168 "iobuf_large_cache_size": 16 00:14:53.168 } 00:14:53.168 }, 00:14:53.168 { 00:14:53.168 "method": "bdev_raid_set_options", 00:14:53.168 "params": { 00:14:53.168 "process_window_size_kb": 1024 00:14:53.168 } 00:14:53.168 }, 00:14:53.168 { 00:14:53.169 "method": "bdev_iscsi_set_options", 00:14:53.169 "params": { 00:14:53.169 "timeout_sec": 30 00:14:53.169 } 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "method": "bdev_nvme_set_options", 00:14:53.169 "params": { 00:14:53.169 "action_on_timeout": "none", 00:14:53.169 "timeout_us": 0, 00:14:53.169 "timeout_admin_us": 0, 00:14:53.169 "keep_alive_timeout_ms": 10000, 00:14:53.169 "arbitration_burst": 0, 00:14:53.169 "low_priority_weight": 0, 00:14:53.169 "medium_priority_weight": 0, 00:14:53.169 "high_priority_weight": 0, 00:14:53.169 "nvme_adminq_poll_period_us": 10000, 00:14:53.169 "nvme_ioq_poll_period_us": 0, 00:14:53.169 "io_queue_requests": 0, 00:14:53.169 "delay_cmd_submit": true, 00:14:53.169 
"transport_retry_count": 4, 00:14:53.169 "bdev_retry_count": 3, 00:14:53.169 "transport_ack_timeout": 0, 00:14:53.169 "ctrlr_loss_timeout_sec": 0, 00:14:53.169 "reconnect_delay_sec": 0, 00:14:53.169 "fast_io_fail_timeout_sec": 0, 00:14:53.169 "disable_auto_failback": false, 00:14:53.169 "generate_uuids": false, 00:14:53.169 "transport_tos": 0, 00:14:53.169 "nvme_error_stat": false, 00:14:53.169 "rdma_srq_size": 0, 00:14:53.169 "io_path_stat": false, 00:14:53.169 "allow_accel_sequence": false, 00:14:53.169 "rdma_max_cq_size": 0, 00:14:53.169 "rdma_cm_event_timeout_ms": 0, 00:14:53.169 "dhchap_digests": [ 00:14:53.169 "sha256", 00:14:53.169 "sha384", 00:14:53.169 "sha512" 00:14:53.169 ], 00:14:53.169 "dhchap_dhgroups": [ 00:14:53.169 "null", 00:14:53.169 "ffdhe2048", 00:14:53.169 "ffdhe3072", 00:14:53.169 "ffdhe4096", 00:14:53.169 "ffdhe6144", 00:14:53.169 "ffdhe8192" 00:14:53.169 ] 00:14:53.169 } 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "method": "bdev_nvme_set_hotplug", 00:14:53.169 "params": { 00:14:53.169 "period_us": 100000, 00:14:53.169 "enable": false 00:14:53.169 } 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "method": "bdev_malloc_create", 00:14:53.169 "params": { 00:14:53.169 "name": "malloc0", 00:14:53.169 "num_blocks": 8192, 00:14:53.169 "block_size": 4096, 00:14:53.169 "physical_block_size": 4096, 00:14:53.169 "uuid": "cff22f7c-2347-47bc-bb4c-5d04ee7402aa", 00:14:53.169 "optimal_io_boundary": 0 00:14:53.169 } 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "method": "bdev_wait_for_examine" 00:14:53.169 } 00:14:53.169 ] 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "scsi", 00:14:53.169 "config": null 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "scheduler", 00:14:53.169 "config": [ 00:14:53.169 { 00:14:53.169 "method": "framework_set_scheduler", 00:14:53.169 "params": { 00:14:53.169 "name": "static" 00:14:53.169 } 00:14:53.169 } 00:14:53.169 ] 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "vhost_scsi", 00:14:53.169 "config": [] 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "vhost_blk", 00:14:53.169 "config": [] 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "ublk", 00:14:53.169 "config": [ 00:14:53.169 { 00:14:53.169 "method": "ublk_create_target", 00:14:53.169 "params": { 00:14:53.169 "cpumask": "1" 00:14:53.169 } 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "method": "ublk_start_disk", 00:14:53.169 "params": { 00:14:53.169 "bdev_name": "malloc0", 00:14:53.169 "ublk_id": 0, 00:14:53.169 "num_queues": 1, 00:14:53.169 "queue_depth": 128 00:14:53.169 } 00:14:53.169 } 00:14:53.169 ] 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "nbd", 00:14:53.169 "config": [] 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "nvmf", 00:14:53.169 "config": [ 00:14:53.169 { 00:14:53.169 "method": "nvmf_set_config", 00:14:53.169 "params": { 00:14:53.169 "discovery_filter": "match_any", 00:14:53.169 "admin_cmd_passthru": { 00:14:53.169 "identify_ctrlr": false 00:14:53.169 } 00:14:53.169 } 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "method": "nvmf_set_max_subsystems", 00:14:53.169 "params": { 00:14:53.169 "max_subsystems": 1024 00:14:53.169 } 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "method": "nvmf_set_crdt", 00:14:53.169 "params": { 00:14:53.169 "crdt1": 0, 00:14:53.169 "crdt2": 0, 00:14:53.169 "crdt3": 0 00:14:53.169 } 00:14:53.169 } 00:14:53.169 ] 00:14:53.169 }, 00:14:53.169 { 00:14:53.169 "subsystem": "iscsi", 00:14:53.169 "config": [ 00:14:53.169 { 00:14:53.169 "method": "iscsi_set_options", 00:14:53.169 "params": { 
00:14:53.169 "node_base": "iqn.2016-06.io.spdk", 00:14:53.169 "max_sessions": 128, 00:14:53.169 "max_connections_per_session": 2, 00:14:53.169 "max_queue_depth": 64, 00:14:53.169 "default_time2wait": 2, 00:14:53.169 "default_time2retain": 20, 00:14:53.169 "first_burst_length": 8192, 00:14:53.169 "immediate_data": true, 00:14:53.169 "allow_duplicated_isid": false, 00:14:53.169 "error_recovery_level": 0, 00:14:53.169 "nop_timeout": 60, 00:14:53.169 "nop_in_interval": 30, 00:14:53.169 "disable_chap": false, 00:14:53.169 "require_chap": false, 00:14:53.169 "mutual_chap": false, 00:14:53.169 "chap_group": 0, 00:14:53.169 "max_large_datain_per_connection": 64, 00:14:53.169 "max_r2t_per_connection": 4, 00:14:53.169 "pdu_pool_size": 36864, 00:14:53.169 "immediate_data_pool_size": 16384, 00:14:53.169 "data_out_pool_size": 2048 00:14:53.169 } 00:14:53.169 } 00:14:53.169 ] 00:14:53.169 } 00:14:53.169 ] 00:14:53.169 }' 00:14:53.169 01:24:38 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86743 00:14:53.169 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86743 ']' 00:14:53.169 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86743 00:14:53.169 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:53.169 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.169 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86743 00:14:53.428 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:53.428 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:53.428 killing process with pid 86743 00:14:53.428 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86743' 00:14:53.428 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86743 00:14:53.428 01:24:38 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86743 00:14:53.686 [2024-07-21 01:24:38.932578] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:53.686 [2024-07-21 01:24:38.966945] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:53.686 [2024-07-21 01:24:38.967102] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:53.686 [2024-07-21 01:24:38.974865] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:53.686 [2024-07-21 01:24:38.974921] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:53.686 [2024-07-21 01:24:38.974942] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:53.686 [2024-07-21 01:24:38.974978] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:53.686 [2024-07-21 01:24:38.975145] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86781 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86781 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86781 ']' 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.253 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:54.253 01:24:39 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:54.253 "subsystems": [ 00:14:54.253 { 00:14:54.253 "subsystem": "keyring", 00:14:54.253 "config": [] 00:14:54.253 }, 00:14:54.253 { 00:14:54.253 "subsystem": "iobuf", 00:14:54.253 "config": [ 00:14:54.253 { 00:14:54.253 "method": "iobuf_set_options", 00:14:54.253 "params": { 00:14:54.253 "small_pool_count": 8192, 00:14:54.253 "large_pool_count": 1024, 00:14:54.253 "small_bufsize": 8192, 00:14:54.253 "large_bufsize": 135168 00:14:54.253 } 00:14:54.253 } 00:14:54.253 ] 00:14:54.253 }, 00:14:54.253 { 00:14:54.253 "subsystem": "sock", 00:14:54.253 "config": [ 00:14:54.253 { 00:14:54.253 "method": "sock_set_default_impl", 00:14:54.253 "params": { 00:14:54.253 "impl_name": "posix" 00:14:54.253 } 00:14:54.253 }, 00:14:54.253 { 00:14:54.253 "method": "sock_impl_set_options", 00:14:54.253 "params": { 00:14:54.253 "impl_name": "ssl", 00:14:54.253 "recv_buf_size": 4096, 00:14:54.253 "send_buf_size": 4096, 00:14:54.253 "enable_recv_pipe": true, 00:14:54.253 "enable_quickack": false, 00:14:54.253 "enable_placement_id": 0, 00:14:54.253 "enable_zerocopy_send_server": true, 00:14:54.253 "enable_zerocopy_send_client": false, 00:14:54.253 "zerocopy_threshold": 0, 00:14:54.253 "tls_version": 0, 00:14:54.253 "enable_ktls": false 00:14:54.253 } 00:14:54.253 }, 00:14:54.253 { 00:14:54.253 "method": "sock_impl_set_options", 00:14:54.253 "params": { 00:14:54.253 "impl_name": "posix", 00:14:54.253 "recv_buf_size": 2097152, 00:14:54.253 "send_buf_size": 2097152, 00:14:54.253 "enable_recv_pipe": true, 00:14:54.253 "enable_quickack": false, 00:14:54.253 "enable_placement_id": 0, 00:14:54.253 "enable_zerocopy_send_server": true, 00:14:54.253 "enable_zerocopy_send_client": false, 00:14:54.253 "zerocopy_threshold": 0, 00:14:54.253 "tls_version": 0, 00:14:54.253 "enable_ktls": false 00:14:54.253 } 00:14:54.253 } 00:14:54.253 ] 00:14:54.253 }, 00:14:54.253 { 00:14:54.253 "subsystem": "vmd", 00:14:54.253 "config": [] 00:14:54.253 }, 00:14:54.253 { 00:14:54.253 "subsystem": "accel", 00:14:54.253 "config": [ 00:14:54.253 { 00:14:54.253 "method": "accel_set_options", 00:14:54.253 "params": { 00:14:54.253 "small_cache_size": 128, 00:14:54.253 "large_cache_size": 16, 00:14:54.253 "task_count": 2048, 00:14:54.253 "sequence_count": 2048, 00:14:54.253 "buf_count": 2048 00:14:54.253 } 00:14:54.253 } 00:14:54.253 ] 00:14:54.253 }, 00:14:54.253 { 00:14:54.254 "subsystem": "bdev", 00:14:54.254 "config": [ 00:14:54.254 { 00:14:54.254 "method": "bdev_set_options", 00:14:54.254 "params": { 00:14:54.254 "bdev_io_pool_size": 65535, 00:14:54.254 "bdev_io_cache_size": 256, 00:14:54.254 "bdev_auto_examine": true, 00:14:54.254 "iobuf_small_cache_size": 128, 00:14:54.254 "iobuf_large_cache_size": 16 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "bdev_raid_set_options", 00:14:54.254 "params": { 00:14:54.254 "process_window_size_kb": 1024 00:14:54.254 } 
00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "bdev_iscsi_set_options", 00:14:54.254 "params": { 00:14:54.254 "timeout_sec": 30 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "bdev_nvme_set_options", 00:14:54.254 "params": { 00:14:54.254 "action_on_timeout": "none", 00:14:54.254 "timeout_us": 0, 00:14:54.254 "timeout_admin_us": 0, 00:14:54.254 "keep_alive_timeout_ms": 10000, 00:14:54.254 "arbitration_burst": 0, 00:14:54.254 "low_priority_weight": 0, 00:14:54.254 "medium_priority_weight": 0, 00:14:54.254 "high_priority_weight": 0, 00:14:54.254 "nvme_adminq_poll_period_us": 10000, 00:14:54.254 "nvme_ioq_poll_period_us": 0, 00:14:54.254 "io_queue_requests": 0, 00:14:54.254 "delay_cmd_submit": true, 00:14:54.254 "transport_retry_count": 4, 00:14:54.254 "bdev_retry_count": 3, 00:14:54.254 "transport_ack_timeout": 0, 00:14:54.254 "ctrlr_loss_timeout_sec": 0, 00:14:54.254 "reconnect_delay_sec": 0, 00:14:54.254 "fast_io_fail_timeout_sec": 0, 00:14:54.254 "disable_auto_failback": false, 00:14:54.254 "generate_uuids": false, 00:14:54.254 "transport_tos": 0, 00:14:54.254 "nvme_error_stat": false, 00:14:54.254 "rdma_srq_size": 0, 00:14:54.254 "io_path_stat": false, 00:14:54.254 "allow_accel_sequence": false, 00:14:54.254 "rdma_max_cq_size": 0, 00:14:54.254 "rdma_cm_event_timeout_ms": 0, 00:14:54.254 "dhchap_digests": [ 00:14:54.254 "sha256", 00:14:54.254 "sha384", 00:14:54.254 "sha512" 00:14:54.254 ], 00:14:54.254 "dhchap_dhgroups": [ 00:14:54.254 "null", 00:14:54.254 "ffdhe2048", 00:14:54.254 "ffdhe3072", 00:14:54.254 "ffdhe4096", 00:14:54.254 "ffdhe6144", 00:14:54.254 "ffdhe8192" 00:14:54.254 ] 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "bdev_nvme_set_hotplug", 00:14:54.254 "params": { 00:14:54.254 "period_us": 100000, 00:14:54.254 "enable": false 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "bdev_malloc_create", 00:14:54.254 "params": { 00:14:54.254 "name": "malloc0", 00:14:54.254 "num_blocks": 8192, 00:14:54.254 "block_size": 4096, 00:14:54.254 "physical_block_size": 4096, 00:14:54.254 "uuid": "cff22f7c-2347-47bc-bb4c-5d04ee7402aa", 00:14:54.254 "optimal_io_boundary": 0 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "bdev_wait_for_examine" 00:14:54.254 } 00:14:54.254 ] 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "scsi", 00:14:54.254 "config": null 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "scheduler", 00:14:54.254 "config": [ 00:14:54.254 { 00:14:54.254 "method": "framework_set_scheduler", 00:14:54.254 "params": { 00:14:54.254 "name": "static" 00:14:54.254 } 00:14:54.254 } 00:14:54.254 ] 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "vhost_scsi", 00:14:54.254 "config": [] 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "vhost_blk", 00:14:54.254 "config": [] 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "ublk", 00:14:54.254 "config": [ 00:14:54.254 { 00:14:54.254 "method": "ublk_create_target", 00:14:54.254 "params": { 00:14:54.254 "cpumask": "1" 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "ublk_start_disk", 00:14:54.254 "params": { 00:14:54.254 "bdev_name": "malloc0", 00:14:54.254 "ublk_id": 0, 00:14:54.254 "num_queues": 1, 00:14:54.254 "queue_depth": 128 00:14:54.254 } 00:14:54.254 } 00:14:54.254 ] 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "nbd", 00:14:54.254 "config": [] 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "nvmf", 00:14:54.254 "config": [ 00:14:54.254 { 00:14:54.254 "method": 
"nvmf_set_config", 00:14:54.254 "params": { 00:14:54.254 "discovery_filter": "match_any", 00:14:54.254 "admin_cmd_passthru": { 00:14:54.254 "identify_ctrlr": false 00:14:54.254 } 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "nvmf_set_max_subsystems", 00:14:54.254 "params": { 00:14:54.254 "max_subsystems": 1024 00:14:54.254 } 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "method": "nvmf_set_crdt", 00:14:54.254 "params": { 00:14:54.254 "crdt1": 0, 00:14:54.254 "crdt2": 0, 00:14:54.254 "crdt3": 0 00:14:54.254 } 00:14:54.254 } 00:14:54.254 ] 00:14:54.254 }, 00:14:54.254 { 00:14:54.254 "subsystem": "iscsi", 00:14:54.254 "config": [ 00:14:54.254 { 00:14:54.254 "method": "iscsi_set_options", 00:14:54.254 "params": { 00:14:54.254 "node_base": "iqn.2016-06.io.spdk", 00:14:54.254 "max_sessions": 128, 00:14:54.254 "max_connections_per_session": 2, 00:14:54.254 "max_queue_depth": 64, 00:14:54.254 "default_time2wait": 2, 00:14:54.254 "default_time2retain": 20, 00:14:54.254 "first_burst_length": 8192, 00:14:54.254 "immediate_data": true, 00:14:54.254 "allow_duplicated_isid": false, 00:14:54.254 "error_recovery_level": 0, 00:14:54.254 "nop_timeout": 60, 00:14:54.254 "nop_in_interval": 30, 00:14:54.254 "disable_chap": false, 00:14:54.254 "require_chap": false, 00:14:54.254 "mutual_chap": false, 00:14:54.254 "chap_group": 0, 00:14:54.254 "max_large_datain_per_connection": 64, 00:14:54.254 "max_r2t_per_connection": 4, 00:14:54.254 "pdu_pool_size": 36864, 00:14:54.254 "immediate_data_pool_size": 16384, 00:14:54.254 "data_out_pool_size": 2048 00:14:54.254 } 00:14:54.254 } 00:14:54.254 ] 00:14:54.254 } 00:14:54.254 ] 00:14:54.254 }' 00:14:54.254 [2024-07-21 01:24:39.439447] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:14:54.254 [2024-07-21 01:24:39.439561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86781 ] 00:14:54.512 [2024-07-21 01:24:39.605655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.512 [2024-07-21 01:24:39.686886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.076 [2024-07-21 01:24:40.141847] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:55.076 [2024-07-21 01:24:40.142214] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:55.076 [2024-07-21 01:24:40.149979] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:55.076 [2024-07-21 01:24:40.150123] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:55.076 [2024-07-21 01:24:40.150136] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:55.076 [2024-07-21 01:24:40.150151] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:55.076 [2024-07-21 01:24:40.158935] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:55.076 [2024-07-21 01:24:40.158958] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:55.076 [2024-07-21 01:24:40.165868] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:55.076 [2024-07-21 01:24:40.165965] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:55.076 [2024-07-21 01:24:40.182858] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86781 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86781 ']' 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86781 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86781 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:55.076 killing process with pid 86781 00:14:55.076 01:24:40 
ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86781' 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86781 00:14:55.076 01:24:40 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86781 00:14:55.641 [2024-07-21 01:24:40.750431] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:55.641 [2024-07-21 01:24:40.788870] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:55.641 [2024-07-21 01:24:40.792843] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:55.641 [2024-07-21 01:24:40.801866] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:55.641 [2024-07-21 01:24:40.801935] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:55.641 [2024-07-21 01:24:40.801945] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:55.641 [2024-07-21 01:24:40.801975] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:55.641 [2024-07-21 01:24:40.802133] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:55.899 01:24:41 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:55.899 00:14:55.899 real 0m3.937s 00:14:55.899 user 0m2.629s 00:14:55.899 sys 0m1.932s 00:14:55.899 01:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:55.899 ************************************ 00:14:55.899 END TEST test_save_ublk_config 00:14:55.899 ************************************ 00:14:55.899 01:24:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:56.157 01:24:41 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86832 00:14:56.157 01:24:41 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:56.157 01:24:41 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:56.157 01:24:41 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86832 00:14:56.157 01:24:41 ublk -- common/autotest_common.sh@827 -- # '[' -z 86832 ']' 00:14:56.157 01:24:41 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.157 01:24:41 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.157 01:24:41 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.157 01:24:41 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.157 01:24:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.157 [2024-07-21 01:24:41.333208] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
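This second spdk_tgt (pid 86832) runs with core mask 0x3, so two reactors come up, as the "Reactor started on core 0/1" messages below confirm; it hosts the remaining ublk tests. A condensed sketch of the RPC sequence test_create_ublk drives against it, using only calls that appear in the trace that follows (rpc_cmd is the harness wrapper around scripts/rpc.py, and the rpc shorthand here is illustrative):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target                        # bring up the userspace ublk target
  $rpc bdev_malloc_create 128 4096               # 128 MiB RAM bdev; prints its generated name, Malloc0
  $rpc ublk_start_disk Malloc0 0 -q 4 -d 512     # expose it as /dev/ublkb0 with 4 queues, depth 512
  $rpc ublk_get_disks                            # verify ublk_device, id, queue_depth, num_queues, bdev_name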
00:14:56.157 [2024-07-21 01:24:41.333512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86832 ] 00:14:56.415 [2024-07-21 01:24:41.508065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:56.415 [2024-07-21 01:24:41.571615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.415 [2024-07-21 01:24:41.571721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.981 01:24:42 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:56.981 01:24:42 ublk -- common/autotest_common.sh@860 -- # return 0 00:14:56.981 01:24:42 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:56.981 01:24:42 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:56.981 01:24:42 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.981 01:24:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.981 ************************************ 00:14:56.981 START TEST test_create_ublk 00:14:56.981 ************************************ 00:14:56.981 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:14:56.981 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:56.981 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.981 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.982 [2024-07-21 01:24:42.124874] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:56.982 [2024-07-21 01:24:42.127017] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:56.982 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.982 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:56.982 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:56.982 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.982 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.982 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.982 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:56.982 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:56.982 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.982 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.982 [2024-07-21 01:24:42.253028] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:56.982 [2024-07-21 01:24:42.253539] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:56.982 [2024-07-21 01:24:42.253571] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:56.982 [2024-07-21 01:24:42.253599] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.982 [2024-07-21 01:24:42.262253] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.982 [2024-07-21 01:24:42.262279] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.982 [2024-07-21 01:24:42.268870] 
ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.982 [2024-07-21 01:24:42.279892] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:57.240 [2024-07-21 01:24:42.294866] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:57.240 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:57.240 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.240 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.240 01:24:42 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:57.240 { 00:14:57.240 "ublk_device": "/dev/ublkb0", 00:14:57.240 "id": 0, 00:14:57.240 "queue_depth": 512, 00:14:57.240 "num_queues": 4, 00:14:57.240 "bdev_name": "Malloc0" 00:14:57.240 } 00:14:57.240 ]' 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:57.240 01:24:42 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:57.240 01:24:42 
ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:57.498 fio: verification read phase will never start because write phase uses all of runtime 00:14:57.498 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:57.498 fio-3.35 00:14:57.498 Starting 1 process 00:15:07.497 00:15:07.497 fio_test: (groupid=0, jobs=1): err= 0: pid=86876: Sun Jul 21 01:24:52 2024 00:15:07.497 write: IOPS=14.1k, BW=54.9MiB/s (57.6MB/s)(549MiB/10001msec); 0 zone resets 00:15:07.497 clat (usec): min=40, max=9626, avg=70.30, stdev=157.97 00:15:07.497 lat (usec): min=40, max=9656, avg=70.77, stdev=158.02 00:15:07.497 clat percentiles (usec): 00:15:07.497 | 1.00th=[ 56], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 59], 00:15:07.497 | 30.00th=[ 60], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:15:07.497 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 70], 95.00th=[ 78], 00:15:07.497 | 99.00th=[ 93], 99.50th=[ 109], 99.90th=[ 3458], 99.95th=[ 3687], 00:15:07.497 | 99.99th=[ 4047] 00:15:07.497 bw ( KiB/s): min=19984, max=61456, per=99.74%, avg=56076.05, stdev=12813.34, samples=19 00:15:07.497 iops : min= 4996, max=15364, avg=14019.00, stdev=3203.33, samples=19 00:15:07.497 lat (usec) : 50=0.01%, 100=99.30%, 250=0.37%, 500=0.01%, 750=0.01% 00:15:07.497 lat (usec) : 1000=0.02% 00:15:07.497 lat (msec) : 2=0.06%, 4=0.20%, 10=0.01% 00:15:07.497 cpu : usr=2.78%, sys=9.34%, ctx=140564, majf=0, minf=797 00:15:07.497 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.497 issued rwts: total=0,140563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.497 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.497 00:15:07.497 Run status group 0 (all jobs): 00:15:07.497 WRITE: bw=54.9MiB/s (57.6MB/s), 54.9MiB/s-54.9MiB/s (57.6MB/s-57.6MB/s), io=549MiB (576MB), run=10001-10001msec 00:15:07.497 00:15:07.497 Disk stats (read/write): 00:15:07.497 ublkb0: ios=0/139014, merge=0/0, ticks=0/8734, in_queue=8734, util=99.13% 00:15:07.497 01:24:52 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:07.497 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.497 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.497 [2024-07-21 01:24:52.797772] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:07.811 [2024-07-21 01:24:52.839358] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:07.811 [2024-07-21 01:24:52.840762] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:07.811 [2024-07-21 01:24:52.845850] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:07.811 [2024-07-21 01:24:52.846186] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:07.811 [2024-07-21 01:24:52.846204] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.811 01:24:52 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:07.811 
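The 10-second verified write job above pushes roughly 14.1k IOPS (54.9 MiB/s) through /dev/ublkb0 at about 99% device utilization, after which the disk is stopped. The next check is a negative one: a second ublk_stop_disk on the same id must fail, and the JSON-RPC error below carries code -19 (No such device); the NOT helper simply inverts the exit status so an unexpected success would fail the test. A one-line sketch of the same check, assuming the rpc shorthand from earlier:
  if $rpc ublk_stop_disk 0; then echo "unexpected success"; else echo "failed as expected: -19 No such device"; fi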
01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.811 [2024-07-21 01:24:52.859027] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:07.811 request: 00:15:07.811 { 00:15:07.811 "ublk_id": 0, 00:15:07.811 "method": "ublk_stop_disk", 00:15:07.811 "req_id": 1 00:15:07.811 } 00:15:07.811 Got JSON-RPC error response 00:15:07.811 response: 00:15:07.811 { 00:15:07.811 "code": -19, 00:15:07.811 "message": "No such device" 00:15:07.811 } 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:07.811 01:24:52 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.811 [2024-07-21 01:24:52.884973] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:07.811 [2024-07-21 01:24:52.887169] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:07.811 [2024-07-21 01:24:52.887208] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.811 01:24:52 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.811 01:24:52 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:07.811 01:24:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.811 01:24:52 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.811 01:24:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:07.811 01:24:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:07.811 01:24:53 ublk.test_create_ublk -- lvol/common.sh@26 -- # 
'[' 0 == 0 ']' 00:15:07.811 01:24:53 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:07.811 01:24:53 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.811 01:24:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.811 01:24:53 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.811 01:24:53 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:07.811 01:24:53 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:07.811 ************************************ 00:15:07.811 END TEST test_create_ublk 00:15:07.811 ************************************ 00:15:07.811 01:24:53 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:07.811 00:15:07.811 real 0m10.976s 00:15:07.811 user 0m0.673s 00:15:07.811 sys 0m1.064s 00:15:07.811 01:24:53 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:07.811 01:24:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.070 01:24:53 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:08.070 01:24:53 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:08.070 01:24:53 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:08.070 01:24:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.070 ************************************ 00:15:08.070 START TEST test_create_multi_ublk 00:15:08.070 ************************************ 00:15:08.070 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:15:08.070 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:08.070 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.070 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.070 [2024-07-21 01:24:53.171849] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:08.070 [2024-07-21 01:24:53.173074] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.071 [2024-07-21 01:24:53.300004] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:08.071 [2024-07-21 
01:24:53.300539] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:08.071 [2024-07-21 01:24:53.300559] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:08.071 [2024-07-21 01:24:53.300571] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:08.071 [2024-07-21 01:24:53.306859] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:08.071 [2024-07-21 01:24:53.306890] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:08.071 [2024-07-21 01:24:53.314863] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:08.071 [2024-07-21 01:24:53.315497] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:08.071 [2024-07-21 01:24:53.326939] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.071 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.328 [2024-07-21 01:24:53.455016] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:08.328 [2024-07-21 01:24:53.455509] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:08.328 [2024-07-21 01:24:53.455525] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:08.328 [2024-07-21 01:24:53.455533] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:08.328 [2024-07-21 01:24:53.464250] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:08.328 [2024-07-21 01:24:53.464274] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:08.328 [2024-07-21 01:24:53.470870] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:08.328 [2024-07-21 01:24:53.471476] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:08.328 [2024-07-21 01:24:53.479949] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.328 [2024-07-21 01:24:53.613013] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:08.328 [2024-07-21 01:24:53.613633] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:08.328 [2024-07-21 01:24:53.613652] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:08.328 [2024-07-21 01:24:53.613665] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:08.328 [2024-07-21 01:24:53.618872] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:08.328 [2024-07-21 01:24:53.618904] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:08.328 [2024-07-21 01:24:53.627849] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:08.328 [2024-07-21 01:24:53.628494] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:08.328 [2024-07-21 01:24:53.633303] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:08.328 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.585 [2024-07-21 01:24:53.759056] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:08.585 [2024-07-21 01:24:53.759585] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:08.585 [2024-07-21 01:24:53.759608] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:08.585 [2024-07-21 01:24:53.759618] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:08.585 [2024-07-21 01:24:53.766908] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:08.585 [2024-07-21 01:24:53.766935] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:08.585 [2024-07-21 01:24:53.774875] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:08.585 [2024-07-21 01:24:53.775504] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:08.585 [2024-07-21 01:24:53.780801] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:08.585 { 00:15:08.585 "ublk_device": "/dev/ublkb0", 00:15:08.585 "id": 0, 00:15:08.585 "queue_depth": 512, 00:15:08.585 "num_queues": 4, 00:15:08.585 "bdev_name": "Malloc0" 00:15:08.585 }, 00:15:08.585 { 00:15:08.585 "ublk_device": "/dev/ublkb1", 00:15:08.585 "id": 1, 00:15:08.585 "queue_depth": 512, 00:15:08.585 "num_queues": 4, 00:15:08.585 "bdev_name": "Malloc1" 00:15:08.585 }, 00:15:08.585 { 00:15:08.585 "ublk_device": "/dev/ublkb2", 00:15:08.585 "id": 2, 00:15:08.585 "queue_depth": 512, 00:15:08.585 "num_queues": 4, 00:15:08.585 "bdev_name": "Malloc2" 00:15:08.585 }, 00:15:08.585 { 00:15:08.585 "ublk_device": "/dev/ublkb3", 00:15:08.585 "id": 3, 00:15:08.585 "queue_depth": 512, 00:15:08.585 "num_queues": 4, 00:15:08.585 "bdev_name": "Malloc3" 00:15:08.585 } 00:15:08.585 ]' 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:08.585 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:08.842 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:08.842 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:08.843 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:08.843 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:08.843 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:08.843 01:24:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:08.843 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:08.843 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.843 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:08.843 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:08.843 01:24:54 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:08.843 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:08.843 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:09.099 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:09.356 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:09.357 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:09.357 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:09.357 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.614 [2024-07-21 01:24:54.694958] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:15:09.614 [2024-07-21 01:24:54.736905] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:09.614 [2024-07-21 01:24:54.738206] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:09.614 [2024-07-21 01:24:54.744870] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:09.614 [2024-07-21 01:24:54.745157] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:09.614 [2024-07-21 01:24:54.745172] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.614 [2024-07-21 01:24:54.756001] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:09.614 [2024-07-21 01:24:54.793394] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:09.614 [2024-07-21 01:24:54.794745] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:09.614 [2024-07-21 01:24:54.800867] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:09.614 [2024-07-21 01:24:54.801151] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:09.614 [2024-07-21 01:24:54.801164] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.614 [2024-07-21 01:24:54.816984] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:09.614 [2024-07-21 01:24:54.850371] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:09.614 [2024-07-21 01:24:54.851696] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:09.614 [2024-07-21 01:24:54.856866] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:09.614 [2024-07-21 01:24:54.857134] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:09.614 [2024-07-21 01:24:54.857147] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.614 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:09.614 [2024-07-21 
01:24:54.872938] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:09.614 [2024-07-21 01:24:54.912918] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:09.614 [2024-07-21 01:24:54.913975] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:09.614 [2024-07-21 01:24:54.920865] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:09.614 [2024-07-21 01:24:54.921113] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:09.614 [2024-07-21 01:24:54.921126] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:09.873 01:24:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.873 01:24:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:09.873 [2024-07-21 01:24:55.096959] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:09.873 [2024-07-21 01:24:55.098393] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:09.873 [2024-07-21 01:24:55.098426] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:09.873 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:09.873 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.873 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:09.873 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.873 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.132 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
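Teardown for the multi-disk test mirrors its setup: each of the four disks is stopped, the target is destroyed with a longer RPC timeout, the malloc bdevs are deleted, and check_leftover_devices then expects both bdev_get_bdevs and bdev_lvol_get_lvstores to return empty arrays. A compressed sketch of that sequence, using only RPCs visible in this trace (the rpc shorthand is illustrative):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in 0 1 2 3; do $rpc ublk_stop_disk "$i"; done
  $rpc -t 120 ublk_destroy_target                          # allow up to 120 s for the target to shut down
  for m in Malloc0 Malloc1 Malloc2 Malloc3; do $rpc bdev_malloc_delete "$m"; done
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]           # nothing left behind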
00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:10.391 ************************************ 00:15:10.391 END TEST test_create_multi_ublk 00:15:10.391 ************************************ 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:10.391 00:15:10.391 real 0m2.420s 00:15:10.391 user 0m0.993s 00:15:10.391 sys 0m0.239s 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.391 01:24:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.391 01:24:55 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:10.391 01:24:55 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:10.391 01:24:55 ublk -- ublk/ublk.sh@130 -- # killprocess 86832 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@946 -- # '[' -z 86832 ']' 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@950 -- # kill -0 86832 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@951 -- # uname 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86832 00:15:10.391 killing process with pid 86832 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86832' 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@965 -- # kill 86832 00:15:10.391 01:24:55 ublk -- common/autotest_common.sh@970 -- # wait 86832 00:15:10.650 [2024-07-21 01:24:55.906260] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:10.650 [2024-07-21 01:24:55.906337] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:11.218 ************************************ 00:15:11.218 END TEST ublk 00:15:11.218 ************************************ 00:15:11.218 00:15:11.218 real 0m19.194s 00:15:11.218 user 0m29.581s 00:15:11.218 sys 0m7.998s 00:15:11.218 01:24:56 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:11.218 01:24:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.218 01:24:56 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:11.218 01:24:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:11.218 
01:24:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:11.218 01:24:56 -- common/autotest_common.sh@10 -- # set +x 00:15:11.218 ************************************ 00:15:11.218 START TEST ublk_recovery 00:15:11.218 ************************************ 00:15:11.218 01:24:56 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:11.218 * Looking for test storage... 00:15:11.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:11.218 01:24:56 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:11.218 01:24:56 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:11.218 01:24:56 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:11.218 01:24:56 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=87181 00:15:11.218 01:24:56 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:11.218 01:24:56 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.218 01:24:56 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 87181 00:15:11.218 01:24:56 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87181 ']' 00:15:11.218 01:24:56 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.218 01:24:56 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.218 01:24:56 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.218 01:24:56 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.218 01:24:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.477 [2024-07-21 01:24:56.566263] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
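ublk_recovery exercises the driver's user-recovery path: bring up a target, expose a malloc bdev through ublk, run a long fio job against the block device, kill the target mid-I/O, and let a replacement target reclaim the still-open kernel device. The setup half, sketched with the RPCs traced below (rpc shorthand illustrative):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096      # 64 MiB bdev named malloc0
  $rpc ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1 with 2 queues, depth 128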
00:15:11.477 [2024-07-21 01:24:56.566381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87181 ] 00:15:11.477 [2024-07-21 01:24:56.734425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:11.735 [2024-07-21 01:24:56.801512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.735 [2024-07-21 01:24:56.801601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:15:12.303 01:24:57 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.303 [2024-07-21 01:24:57.347856] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:12.303 [2024-07-21 01:24:57.350090] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.303 01:24:57 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.303 malloc0 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.303 01:24:57 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.303 [2024-07-21 01:24:57.419015] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:12.303 [2024-07-21 01:24:57.419146] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:12.303 [2024-07-21 01:24:57.419164] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:12.303 [2024-07-21 01:24:57.419173] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:12.303 [2024-07-21 01:24:57.426869] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:12.303 [2024-07-21 01:24:57.426905] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:12.303 [2024-07-21 01:24:57.434869] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:12.303 [2024-07-21 01:24:57.435036] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:12.303 [2024-07-21 01:24:57.457870] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:12.303 1 00:15:12.303 01:24:57 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.303 01:24:57 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:13.236 01:24:58 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87214 00:15:13.236 01:24:58 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:13.236 01:24:58 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # 
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:13.495 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:13.496 fio-3.35 00:15:13.496 Starting 1 process 00:15:18.766 01:25:03 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 87181 00:15:18.766 01:25:03 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:24.041 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 87181 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:24.041 01:25:08 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87319 00:15:24.041 01:25:08 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:24.041 01:25:08 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:24.041 01:25:08 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87319 00:15:24.041 01:25:08 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87319 ']' 00:15:24.041 01:25:08 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.041 01:25:08 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:24.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.041 01:25:08 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.041 01:25:08 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:24.041 01:25:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.041 [2024-07-21 01:25:08.578508] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
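The trace above covers the first half of the recovery scenario: a ublk device is exported over a malloc bdev, fio is started against it, and the target is killed mid-I/O before a fresh target (pid 87319) is brought up. A minimal standalone sketch of that setup, assuming repository-relative spdk_tgt/rpc.py paths and the default /var/tmp/spdk.sock RPC socket (PID handling is illustrative, not the test script itself):

  # first half of the scenario traced above (sketch)
  ./build/bin/spdk_tgt -m 0x3 -L ublk &
  TGT_PID=$!
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  ./scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128        # exposes /dev/ublkb1
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  FIO_PID=$!
  sleep 5
  kill -9 "$TGT_PID"                                            # crash the target while fio is still running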
00:15:24.041 [2024-07-21 01:25:08.578625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87319 ] 00:15:24.041 [2024-07-21 01:25:08.749043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:24.041 [2024-07-21 01:25:08.813511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.041 [2024-07-21 01:25:08.813613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:15:24.300 01:25:09 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.300 [2024-07-21 01:25:09.362854] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:24.300 [2024-07-21 01:25:09.365084] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.300 01:25:09 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.300 malloc0 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.300 01:25:09 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.300 [2024-07-21 01:25:09.427001] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:24.300 [2024-07-21 01:25:09.427060] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:24.300 [2024-07-21 01:25:09.427073] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:24.300 [2024-07-21 01:25:09.434897] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:24.300 [2024-07-21 01:25:09.434924] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:24.300 [2024-07-21 01:25:09.435027] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:24.300 1 00:15:24.300 01:25:09 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.300 01:25:09 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87214 00:15:24.300 [2024-07-21 01:25:09.442869] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:24.300 [2024-07-21 01:25:09.449491] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:24.300 [2024-07-21 01:25:09.457113] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:24.300 [2024-07-21 01:25:09.457134] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:20.513 00:16:20.513 fio_test: (groupid=0, 
jobs=1): err= 0: pid=87217: Sun Jul 21 01:25:58 2024 00:16:20.513 read: IOPS=22.8k, BW=89.1MiB/s (93.4MB/s)(5346MiB/60002msec) 00:16:20.513 slat (usec): min=2, max=907, avg= 7.33, stdev= 2.59 00:16:20.513 clat (usec): min=950, max=5989.8k, avg=2753.90, stdev=40613.58 00:16:20.513 lat (usec): min=956, max=5989.8k, avg=2761.23, stdev=40613.57 00:16:20.513 clat percentiles (usec): 00:16:20.513 | 1.00th=[ 1958], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2212], 00:16:20.513 | 30.00th=[ 2245], 40.00th=[ 2311], 50.00th=[ 2343], 60.00th=[ 2376], 00:16:20.513 | 70.00th=[ 2409], 80.00th=[ 2474], 90.00th=[ 2900], 95.00th=[ 3720], 00:16:20.513 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 6980], 99.95th=[ 7439], 00:16:20.513 | 99.99th=[ 9241] 00:16:20.513 bw ( KiB/s): min=24648, max=109056, per=100.00%, avg=100556.00, stdev=10708.88, samples=108 00:16:20.513 iops : min= 6162, max=27264, avg=25138.96, stdev=2677.24, samples=108 00:16:20.513 write: IOPS=22.8k, BW=89.0MiB/s (93.3MB/s)(5340MiB/60002msec); 0 zone resets 00:16:20.513 slat (usec): min=2, max=883, avg= 7.53, stdev= 2.50 00:16:20.513 clat (usec): min=1044, max=5989.9k, avg=2844.76, stdev=41277.94 00:16:20.513 lat (usec): min=1050, max=5989.9k, avg=2852.29, stdev=41277.93 00:16:20.513 clat percentiles (usec): 00:16:20.513 | 1.00th=[ 1942], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2311], 00:16:20.514 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:16:20.514 | 70.00th=[ 2540], 80.00th=[ 2573], 90.00th=[ 2900], 95.00th=[ 3720], 00:16:20.514 | 99.00th=[ 5014], 99.50th=[ 5407], 99.90th=[ 7111], 99.95th=[ 7635], 00:16:20.514 | 99.99th=[ 9503] 00:16:20.514 bw ( KiB/s): min=25016, max=107544, per=100.00%, avg=100422.88, stdev=10566.58, samples=108 00:16:20.514 iops : min= 6254, max=26886, avg=25105.69, stdev=2641.67, samples=108 00:16:20.514 lat (usec) : 1000=0.01% 00:16:20.514 lat (msec) : 2=1.76%, 4=94.41%, 10=3.82%, 20=0.01%, >=2000=0.01% 00:16:20.514 cpu : usr=11.42%, sys=33.06%, ctx=115283, majf=0, minf=13 00:16:20.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:20.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.514 issued rwts: total=1368689,1367081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.514 00:16:20.514 Run status group 0 (all jobs): 00:16:20.514 READ: bw=89.1MiB/s (93.4MB/s), 89.1MiB/s-89.1MiB/s (93.4MB/s-93.4MB/s), io=5346MiB (5606MB), run=60002-60002msec 00:16:20.514 WRITE: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=5340MiB (5600MB), run=60002-60002msec 00:16:20.514 00:16:20.514 Disk stats (read/write): 00:16:20.514 ublkb1: ios=1365884/1364295, merge=0/0, ticks=3654778/3638618, in_queue=7293396, util=99.92% 00:16:20.514 01:25:58 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.514 [2024-07-21 01:25:58.743500] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.514 [2024-07-21 01:25:58.779914] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.514 [2024-07-21 01:25:58.780165] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.514 [2024-07-21 
01:25:58.787923] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.514 [2024-07-21 01:25:58.788174] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:20.514 [2024-07-21 01:25:58.788219] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.514 01:25:58 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.514 [2024-07-21 01:25:58.803938] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.514 [2024-07-21 01:25:58.806145] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:20.514 [2024-07-21 01:25:58.806186] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.514 01:25:58 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:20.514 01:25:58 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:20.514 01:25:58 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87319 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 87319 ']' 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 87319 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87319 00:16:20.514 killing process with pid 87319 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87319' 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@965 -- # kill 87319 00:16:20.514 01:25:58 ublk_recovery -- common/autotest_common.sh@970 -- # wait 87319 00:16:20.514 [2024-07-21 01:25:59.081645] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.514 [2024-07-21 01:25:59.081727] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:20.514 00:16:20.514 real 1m3.134s 00:16:20.514 user 1m41.830s 00:16:20.514 sys 0m40.406s 00:16:20.514 01:25:59 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:20.514 ************************************ 00:16:20.514 END TEST ublk_recovery 00:16:20.514 01:25:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.514 ************************************ 00:16:20.514 01:25:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:20.514 01:25:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.514 01:25:59 -- common/autotest_common.sh@10 -- # set +x 00:16:20.514 01:25:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:20.514 
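The second half, traced above, restarts the target and re-attaches the existing /dev/ublkb1 through the user-recovery path (UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY), then lets fio finish before tearing the device down. The fio summary is self-consistent: 1,368,689 reads over 60.002 s is about 22.8k IOPS, which at 4 KiB per I/O is the reported 89.1 MiB/s (93.4 MB/s). A sketch of the recovery-side RPC sequence, under the same assumptions as the setup sketch above:

  # second half: recover the ublk device on the new target, then clean up (sketch)
  ./build/bin/spdk_tgt -m 0x3 -L ublk &
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  ./scripts/rpc.py ublk_recover_disk malloc0 1      # re-attaches /dev/ublkb1 via START_USER_RECOVERY
  wait "$FIO_PID"                                   # fio must survive the crash and complete its 60 s run
  ./scripts/rpc.py ublk_stop_disk 1
  ./scripts/rpc.py ublk_destroy_target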
01:25:59 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:16:20.514 01:25:59 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:20.514 01:25:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:20.514 01:25:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:20.514 01:25:59 -- common/autotest_common.sh@10 -- # set +x 00:16:20.514 ************************************ 00:16:20.514 START TEST ftl 00:16:20.514 ************************************ 00:16:20.514 01:25:59 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:20.514 * Looking for test storage... 00:16:20.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.514 01:25:59 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:20.514 01:25:59 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:20.514 01:25:59 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.514 01:25:59 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.514 01:25:59 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:20.514 01:25:59 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:20.514 01:25:59 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.514 01:25:59 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:20.514 01:25:59 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:20.514 01:25:59 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.514 01:25:59 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.514 01:25:59 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:20.514 01:25:59 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:20.514 01:25:59 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.514 01:25:59 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.514 01:25:59 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:20.514 01:25:59 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:20.514 01:25:59 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.514 01:25:59 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.514 01:25:59 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:20.514 01:25:59 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:20.514 01:25:59 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.514 01:25:59 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.514 01:25:59 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.514 01:25:59 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.514 01:25:59 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:20.514 01:25:59 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:20.514 01:25:59 ftl -- ftl/common.sh@25 
-- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.514 01:25:59 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.514 01:25:59 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.514 01:25:59 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:20.514 01:25:59 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:20.514 01:25:59 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:20.514 01:25:59 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:20.514 01:25:59 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:20.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:20.514 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.514 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.514 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.514 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.514 01:26:00 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:20.514 01:26:00 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=88104 00:16:20.514 01:26:00 ftl -- ftl/ftl.sh@38 -- # waitforlisten 88104 00:16:20.514 01:26:00 ftl -- common/autotest_common.sh@827 -- # '[' -z 88104 ']' 00:16:20.514 01:26:00 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.514 01:26:00 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:20.514 01:26:00 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.514 01:26:00 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:20.514 01:26:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:20.514 [2024-07-21 01:26:00.668524] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
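The ftl.sh prologue traced just below starts the target with --wait-for-rpc and completes initialization over RPC before probing for usable NVMe devices. A rough standalone sketch of that deferred-init sequence, assuming the same repository-relative paths; the /dev/fd/62 seen in the trace is consistent with a process substitution as written here:

  # deferred framework init used by ftl.sh (sketch)
  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py bdev_set_options -d                                   # "-d" as traced; presumably disables bdev auto-examine
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py load_subsystem_config -j <(./scripts/gen_nvme.sh)     # attach all NVMe controllers from generated config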
00:16:20.514 [2024-07-21 01:26:00.668873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88104 ] 00:16:20.514 [2024-07-21 01:26:00.835867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.514 [2024-07-21 01:26:00.898609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.514 01:26:01 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:20.514 01:26:01 ftl -- common/autotest_common.sh@860 -- # return 0 00:16:20.514 01:26:01 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:20.514 01:26:01 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:20.514 01:26:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:20.514 01:26:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:20.514 01:26:02 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:20.514 01:26:02 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:20.514 01:26:02 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@50 -- # break 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@63 -- # break 00:16:20.515 01:26:02 ftl -- ftl/ftl.sh@66 -- # killprocess 88104 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@946 -- # '[' -z 88104 ']' 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@950 -- # kill -0 88104 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@951 -- # uname 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88104 00:16:20.515 killing process with pid 88104 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88104' 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@965 -- # kill 88104 00:16:20.515 01:26:02 ftl -- common/autotest_common.sh@970 -- # wait 88104 00:16:20.515 01:26:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:20.515 01:26:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:20.515 01:26:03 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:20.515 01:26:03 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:20.515 01:26:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:20.515 ************************************ 00:16:20.515 START TEST ftl_fio_basic 00:16:20.515 ************************************ 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:20.515 * Looking for test storage... 00:16:20.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88217 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88217 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 88217 ']' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:20.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:20.515 01:26:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:20.515 [2024-07-21 01:26:03.789659] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
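For reference, the device selection in the ftl.sh prologue above is plain jq filtering of bdev_get_bdevs output: the write-buffer (cache) candidate must expose 64-byte metadata, be non-zoned, and have at least 1310720 blocks, while the base candidate is any other sufficiently large non-zoned namespace. A sketch using the filters exactly as traced (0000:00:10.0 is the cache device chosen above):

  # cache candidate: 64B metadata, not zoned, >= 1310720 blocks
  ./scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # base candidates: any other large enough namespace that is not the chosen cache device
  ./scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'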
00:16:20.515 [2024-07-21 01:26:03.789783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88217 ] 00:16:20.515 [2024-07-21 01:26:03.960750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:20.515 [2024-07-21 01:26:04.025343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.515 [2024-07-21 01:26:04.025437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.515 [2024-07-21 01:26:04.025546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:20.515 01:26:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:20.515 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:20.515 { 00:16:20.515 "name": "nvme0n1", 00:16:20.515 "aliases": [ 00:16:20.515 "390b7f24-cb6e-4acc-b343-353531920a24" 00:16:20.515 ], 00:16:20.515 "product_name": "NVMe disk", 00:16:20.515 "block_size": 4096, 00:16:20.515 "num_blocks": 1310720, 00:16:20.515 "uuid": "390b7f24-cb6e-4acc-b343-353531920a24", 00:16:20.515 "assigned_rate_limits": { 00:16:20.515 "rw_ios_per_sec": 0, 00:16:20.515 "rw_mbytes_per_sec": 0, 00:16:20.515 "r_mbytes_per_sec": 0, 00:16:20.515 "w_mbytes_per_sec": 0 00:16:20.515 }, 00:16:20.515 "claimed": false, 00:16:20.515 "zoned": false, 00:16:20.515 "supported_io_types": { 00:16:20.515 "read": true, 00:16:20.515 "write": true, 00:16:20.515 "unmap": true, 00:16:20.515 "write_zeroes": true, 00:16:20.515 "flush": true, 00:16:20.515 "reset": true, 00:16:20.515 "compare": true, 00:16:20.515 "compare_and_write": false, 00:16:20.515 "abort": true, 00:16:20.515 "nvme_admin": true, 00:16:20.515 "nvme_io": true 00:16:20.515 }, 00:16:20.515 "driver_specific": { 00:16:20.515 "nvme": [ 00:16:20.515 { 00:16:20.515 "pci_address": "0000:00:11.0", 00:16:20.515 "trid": { 00:16:20.515 "trtype": "PCIe", 00:16:20.515 "traddr": "0000:00:11.0" 00:16:20.515 }, 
00:16:20.515 "ctrlr_data": { 00:16:20.516 "cntlid": 0, 00:16:20.516 "vendor_id": "0x1b36", 00:16:20.516 "model_number": "QEMU NVMe Ctrl", 00:16:20.516 "serial_number": "12341", 00:16:20.516 "firmware_revision": "8.0.0", 00:16:20.516 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:20.516 "oacs": { 00:16:20.516 "security": 0, 00:16:20.516 "format": 1, 00:16:20.516 "firmware": 0, 00:16:20.516 "ns_manage": 1 00:16:20.516 }, 00:16:20.516 "multi_ctrlr": false, 00:16:20.516 "ana_reporting": false 00:16:20.516 }, 00:16:20.516 "vs": { 00:16:20.516 "nvme_version": "1.4" 00:16:20.516 }, 00:16:20.516 "ns_data": { 00:16:20.516 "id": 1, 00:16:20.516 "can_share": false 00:16:20.516 } 00:16:20.516 } 00:16:20.516 ], 00:16:20.516 "mp_policy": "active_passive" 00:16:20.516 } 00:16:20.516 } 00:16:20.516 ]' 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=fa2b9b33-fb71-4fa6-885c-37b56bab8c45 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fa2b9b33-fb71-4fa6-885c-37b56bab8c45 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:20.516 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:20.772 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:20.772 { 00:16:20.772 "name": "e87284d5-fa92-4307-9bb1-e7c41f34ae3b", 00:16:20.772 "aliases": [ 00:16:20.772 "lvs/nvme0n1p0" 00:16:20.772 ], 00:16:20.772 "product_name": "Logical Volume", 00:16:20.772 "block_size": 4096, 00:16:20.772 "num_blocks": 26476544, 00:16:20.772 "uuid": "e87284d5-fa92-4307-9bb1-e7c41f34ae3b", 00:16:20.772 "assigned_rate_limits": { 00:16:20.772 "rw_ios_per_sec": 0, 00:16:20.772 "rw_mbytes_per_sec": 0, 00:16:20.772 "r_mbytes_per_sec": 0, 00:16:20.772 "w_mbytes_per_sec": 0 00:16:20.772 }, 00:16:20.772 "claimed": false, 00:16:20.772 "zoned": false, 00:16:20.772 "supported_io_types": { 00:16:20.772 "read": true, 00:16:20.772 "write": true, 00:16:20.772 "unmap": true, 00:16:20.772 "write_zeroes": true, 00:16:20.772 "flush": false, 00:16:20.772 "reset": true, 00:16:20.772 "compare": false, 00:16:20.772 "compare_and_write": false, 00:16:20.772 "abort": false, 00:16:20.772 "nvme_admin": false, 00:16:20.772 "nvme_io": false 00:16:20.772 }, 00:16:20.772 "driver_specific": { 00:16:20.772 "lvol": { 00:16:20.772 "lvol_store_uuid": "fa2b9b33-fb71-4fa6-885c-37b56bab8c45", 00:16:20.773 "base_bdev": "nvme0n1", 00:16:20.773 "thin_provision": true, 00:16:20.773 "num_allocated_clusters": 0, 00:16:20.773 "snapshot": false, 00:16:20.773 "clone": false, 00:16:20.773 "esnap_clone": false 00:16:20.773 } 00:16:20.773 } 00:16:20.773 } 00:16:20.773 ]' 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:20.773 01:26:05 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:21.030 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:21.287 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:21.287 { 00:16:21.287 "name": "e87284d5-fa92-4307-9bb1-e7c41f34ae3b", 00:16:21.287 "aliases": [ 00:16:21.287 "lvs/nvme0n1p0" 
00:16:21.287 ], 00:16:21.287 "product_name": "Logical Volume", 00:16:21.287 "block_size": 4096, 00:16:21.287 "num_blocks": 26476544, 00:16:21.287 "uuid": "e87284d5-fa92-4307-9bb1-e7c41f34ae3b", 00:16:21.287 "assigned_rate_limits": { 00:16:21.287 "rw_ios_per_sec": 0, 00:16:21.287 "rw_mbytes_per_sec": 0, 00:16:21.288 "r_mbytes_per_sec": 0, 00:16:21.288 "w_mbytes_per_sec": 0 00:16:21.288 }, 00:16:21.288 "claimed": false, 00:16:21.288 "zoned": false, 00:16:21.288 "supported_io_types": { 00:16:21.288 "read": true, 00:16:21.288 "write": true, 00:16:21.288 "unmap": true, 00:16:21.288 "write_zeroes": true, 00:16:21.288 "flush": false, 00:16:21.288 "reset": true, 00:16:21.288 "compare": false, 00:16:21.288 "compare_and_write": false, 00:16:21.288 "abort": false, 00:16:21.288 "nvme_admin": false, 00:16:21.288 "nvme_io": false 00:16:21.288 }, 00:16:21.288 "driver_specific": { 00:16:21.288 "lvol": { 00:16:21.288 "lvol_store_uuid": "fa2b9b33-fb71-4fa6-885c-37b56bab8c45", 00:16:21.288 "base_bdev": "nvme0n1", 00:16:21.288 "thin_provision": true, 00:16:21.288 "num_allocated_clusters": 0, 00:16:21.288 "snapshot": false, 00:16:21.288 "clone": false, 00:16:21.288 "esnap_clone": false 00:16:21.288 } 00:16:21.288 } 00:16:21.288 } 00:16:21.288 ]' 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:21.288 01:26:06 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:21.545 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e87284d5-fa92-4307-9bb1-e7c41f34ae3b 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:21.545 { 00:16:21.545 "name": "e87284d5-fa92-4307-9bb1-e7c41f34ae3b", 00:16:21.545 "aliases": [ 00:16:21.545 "lvs/nvme0n1p0" 00:16:21.545 ], 00:16:21.545 "product_name": "Logical Volume", 00:16:21.545 "block_size": 4096, 00:16:21.545 "num_blocks": 26476544, 00:16:21.545 "uuid": "e87284d5-fa92-4307-9bb1-e7c41f34ae3b", 00:16:21.545 "assigned_rate_limits": { 00:16:21.545 "rw_ios_per_sec": 0, 
00:16:21.545 "rw_mbytes_per_sec": 0, 00:16:21.545 "r_mbytes_per_sec": 0, 00:16:21.545 "w_mbytes_per_sec": 0 00:16:21.545 }, 00:16:21.545 "claimed": false, 00:16:21.545 "zoned": false, 00:16:21.545 "supported_io_types": { 00:16:21.545 "read": true, 00:16:21.545 "write": true, 00:16:21.545 "unmap": true, 00:16:21.545 "write_zeroes": true, 00:16:21.545 "flush": false, 00:16:21.545 "reset": true, 00:16:21.545 "compare": false, 00:16:21.545 "compare_and_write": false, 00:16:21.545 "abort": false, 00:16:21.545 "nvme_admin": false, 00:16:21.545 "nvme_io": false 00:16:21.545 }, 00:16:21.545 "driver_specific": { 00:16:21.545 "lvol": { 00:16:21.545 "lvol_store_uuid": "fa2b9b33-fb71-4fa6-885c-37b56bab8c45", 00:16:21.545 "base_bdev": "nvme0n1", 00:16:21.545 "thin_provision": true, 00:16:21.545 "num_allocated_clusters": 0, 00:16:21.545 "snapshot": false, 00:16:21.545 "clone": false, 00:16:21.545 "esnap_clone": false 00:16:21.545 } 00:16:21.545 } 00:16:21.545 } 00:16:21.545 ]' 00:16:21.545 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:21.804 01:26:06 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e87284d5-fa92-4307-9bb1-e7c41f34ae3b -c nvc0n1p0 --l2p_dram_limit 60 00:16:21.804 [2024-07-21 01:26:07.073454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.073510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:21.804 [2024-07-21 01:26:07.073543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:21.804 [2024-07-21 01:26:07.073555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.073629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.073646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.804 [2024-07-21 01:26:07.073661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:21.804 [2024-07-21 01:26:07.073672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.073723] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:21.804 [2024-07-21 01:26:07.074063] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:21.804 [2024-07-21 01:26:07.074102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.074113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.804 [2024-07-21 01:26:07.074127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:16:21.804 [2024-07-21 01:26:07.074137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:21.804 [2024-07-21 01:26:07.074184] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5579b336-656f-49de-bbaa-4fa84962f53b 00:16:21.804 [2024-07-21 01:26:07.076506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.076542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:21.804 [2024-07-21 01:26:07.076554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:16:21.804 [2024-07-21 01:26:07.076571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.090074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.090107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.804 [2024-07-21 01:26:07.090120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.436 ms 00:16:21.804 [2024-07-21 01:26:07.090149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.090325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.090348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.804 [2024-07-21 01:26:07.090360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:21.804 [2024-07-21 01:26:07.090374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.090457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.090472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:21.804 [2024-07-21 01:26:07.090483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:21.804 [2024-07-21 01:26:07.090496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.090538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.804 [2024-07-21 01:26:07.093273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.093314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.804 [2024-07-21 01:26:07.093329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:16:21.804 [2024-07-21 01:26:07.093339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.093392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.804 [2024-07-21 01:26:07.093408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:21.804 [2024-07-21 01:26:07.093421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:21.804 [2024-07-21 01:26:07.093431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.804 [2024-07-21 01:26:07.093467] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:21.805 [2024-07-21 01:26:07.093661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:21.805 [2024-07-21 01:26:07.093681] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:21.805 [2024-07-21 01:26:07.093696] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:21.805 [2024-07-21 01:26:07.093712] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:21.805 [2024-07-21 01:26:07.093726] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:21.805 [2024-07-21 01:26:07.093743] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:21.805 [2024-07-21 01:26:07.093754] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:21.805 [2024-07-21 01:26:07.093768] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:21.805 [2024-07-21 01:26:07.093778] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:21.805 [2024-07-21 01:26:07.093792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.805 [2024-07-21 01:26:07.093802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:21.805 [2024-07-21 01:26:07.093843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:16:21.805 [2024-07-21 01:26:07.093855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.805 [2024-07-21 01:26:07.093966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.805 [2024-07-21 01:26:07.093981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:21.805 [2024-07-21 01:26:07.094004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:16:21.805 [2024-07-21 01:26:07.094014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.805 [2024-07-21 01:26:07.094134] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:21.805 [2024-07-21 01:26:07.094148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:21.805 [2024-07-21 01:26:07.094161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:21.805 [2024-07-21 01:26:07.094193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:21.805 [2024-07-21 01:26:07.094225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.805 [2024-07-21 01:26:07.094246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:21.805 [2024-07-21 01:26:07.094255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:21.805 [2024-07-21 01:26:07.094267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:21.805 [2024-07-21 01:26:07.094276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:21.805 [2024-07-21 01:26:07.094290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:21.805 [2024-07-21 01:26:07.094299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.805 
[2024-07-21 01:26:07.094311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:21.805 [2024-07-21 01:26:07.094319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:21.805 [2024-07-21 01:26:07.094350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:21.805 [2024-07-21 01:26:07.094378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:21.805 [2024-07-21 01:26:07.094415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:21.805 [2024-07-21 01:26:07.094461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:21.805 [2024-07-21 01:26:07.094499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.805 [2024-07-21 01:26:07.094519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:21.805 [2024-07-21 01:26:07.094527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:21.805 [2024-07-21 01:26:07.094539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:21.805 [2024-07-21 01:26:07.094548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:21.805 [2024-07-21 01:26:07.094559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:21.805 [2024-07-21 01:26:07.094569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:21.805 [2024-07-21 01:26:07.094589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:21.805 [2024-07-21 01:26:07.094601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094609] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:21.805 [2024-07-21 01:26:07.094622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:21.805 [2024-07-21 01:26:07.094631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:21.805 [2024-07-21 01:26:07.094660] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:21.805 [2024-07-21 01:26:07.094671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:21.805 [2024-07-21 01:26:07.094680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:21.805 [2024-07-21 01:26:07.094692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:21.805 [2024-07-21 01:26:07.094700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:21.805 [2024-07-21 01:26:07.094713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:21.805 [2024-07-21 01:26:07.094728] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:21.805 [2024-07-21 01:26:07.094743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.805 [2024-07-21 01:26:07.094755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:21.805 [2024-07-21 01:26:07.094770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:21.805 [2024-07-21 01:26:07.094780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:21.805 [2024-07-21 01:26:07.094793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:21.805 [2024-07-21 01:26:07.094804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:21.805 [2024-07-21 01:26:07.094816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:21.805 [2024-07-21 01:26:07.094839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:21.805 [2024-07-21 01:26:07.094856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:21.805 [2024-07-21 01:26:07.094867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:21.805 [2024-07-21 01:26:07.094882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:21.805 [2024-07-21 01:26:07.094892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:21.805 [2024-07-21 01:26:07.094905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:21.805 [2024-07-21 01:26:07.094915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:21.805 [2024-07-21 01:26:07.094929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:21.805 [2024-07-21 01:26:07.094938] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:21.805 [2024-07-21 
01:26:07.094952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:21.805 [2024-07-21 01:26:07.094962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:21.805 [2024-07-21 01:26:07.094977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:21.805 [2024-07-21 01:26:07.094986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:21.805 [2024-07-21 01:26:07.094998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:21.805 [2024-07-21 01:26:07.095008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.805 [2024-07-21 01:26:07.095022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:21.805 [2024-07-21 01:26:07.095032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:16:21.805 [2024-07-21 01:26:07.095047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.805 [2024-07-21 01:26:07.095140] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:21.805 [2024-07-21 01:26:07.095162] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:24.331 [2024-07-21 01:26:09.513391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.513452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:24.331 [2024-07-21 01:26:09.513469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2422.177 ms 00:16:24.331 [2024-07-21 01:26:09.513483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.532312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.532358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.331 [2024-07-21 01:26:09.532374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.730 ms 00:16:24.331 [2024-07-21 01:26:09.532391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.532503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.532521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.331 [2024-07-21 01:26:09.532533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:24.331 [2024-07-21 01:26:09.532546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.558552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.558594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.331 [2024-07-21 01:26:09.558613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.973 ms 00:16:24.331 [2024-07-21 01:26:09.558626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.558672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 
01:26:09.558687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.331 [2024-07-21 01:26:09.558698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:24.331 [2024-07-21 01:26:09.558712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.559497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.559517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.331 [2024-07-21 01:26:09.559528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:16:24.331 [2024-07-21 01:26:09.559546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.559674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.559695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.331 [2024-07-21 01:26:09.559719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:24.331 [2024-07-21 01:26:09.559732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.571830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.571897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.331 [2024-07-21 01:26:09.571911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.086 ms 00:16:24.331 [2024-07-21 01:26:09.571937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.331 [2024-07-21 01:26:09.581017] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:24.331 [2024-07-21 01:26:09.606572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.331 [2024-07-21 01:26:09.606625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.331 [2024-07-21 01:26:09.606645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.559 ms 00:16:24.331 [2024-07-21 01:26:09.606656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.678129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.678173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:24.589 [2024-07-21 01:26:09.678191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.522 ms 00:16:24.589 [2024-07-21 01:26:09.678217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.678429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.678442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.589 [2024-07-21 01:26:09.678457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:16:24.589 [2024-07-21 01:26:09.678467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.682259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.682298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:24.589 [2024-07-21 01:26:09.682314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:16:24.589 [2024-07-21 
01:26:09.682325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.685344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.685376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:24.589 [2024-07-21 01:26:09.685392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:16:24.589 [2024-07-21 01:26:09.685402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.685680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.685696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.589 [2024-07-21 01:26:09.685726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:16:24.589 [2024-07-21 01:26:09.685736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.735386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.735424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:24.589 [2024-07-21 01:26:09.735454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.691 ms 00:16:24.589 [2024-07-21 01:26:09.735465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.741083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.741115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:24.589 [2024-07-21 01:26:09.741143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.577 ms 00:16:24.589 [2024-07-21 01:26:09.741154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.744409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.744439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:24.589 [2024-07-21 01:26:09.744455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:16:24.589 [2024-07-21 01:26:09.744464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.748464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.748496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.589 [2024-07-21 01:26:09.748511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.958 ms 00:16:24.589 [2024-07-21 01:26:09.748520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.748577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.748590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.589 [2024-07-21 01:26:09.748604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:24.589 [2024-07-21 01:26:09.748614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.748733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.589 [2024-07-21 01:26:09.748748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.589 [2024-07-21 01:26:09.748765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:16:24.589 [2024-07-21 01:26:09.748775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.589 [2024-07-21 01:26:09.750288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2680.618 ms, result 0 00:16:24.589 { 00:16:24.589 "name": "ftl0", 00:16:24.589 "uuid": "5579b336-656f-49de-bbaa-4fa84962f53b" 00:16:24.589 } 00:16:24.589 01:26:09 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:24.589 01:26:09 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:16:24.589 01:26:09 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:24.589 01:26:09 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:16:24.589 01:26:09 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:24.589 01:26:09 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:24.589 01:26:09 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:24.847 01:26:09 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:24.847 [ 00:16:24.847 { 00:16:24.847 "name": "ftl0", 00:16:24.847 "aliases": [ 00:16:24.847 "5579b336-656f-49de-bbaa-4fa84962f53b" 00:16:24.847 ], 00:16:24.847 "product_name": "FTL disk", 00:16:24.847 "block_size": 4096, 00:16:24.847 "num_blocks": 20971520, 00:16:24.847 "uuid": "5579b336-656f-49de-bbaa-4fa84962f53b", 00:16:24.847 "assigned_rate_limits": { 00:16:24.847 "rw_ios_per_sec": 0, 00:16:24.847 "rw_mbytes_per_sec": 0, 00:16:24.847 "r_mbytes_per_sec": 0, 00:16:24.847 "w_mbytes_per_sec": 0 00:16:24.847 }, 00:16:24.847 "claimed": false, 00:16:24.847 "zoned": false, 00:16:24.847 "supported_io_types": { 00:16:24.847 "read": true, 00:16:24.847 "write": true, 00:16:24.847 "unmap": true, 00:16:24.847 "write_zeroes": true, 00:16:24.847 "flush": true, 00:16:24.847 "reset": false, 00:16:24.847 "compare": false, 00:16:24.847 "compare_and_write": false, 00:16:24.847 "abort": false, 00:16:24.847 "nvme_admin": false, 00:16:24.847 "nvme_io": false 00:16:24.847 }, 00:16:24.847 "driver_specific": { 00:16:24.847 "ftl": { 00:16:24.847 "base_bdev": "e87284d5-fa92-4307-9bb1-e7c41f34ae3b", 00:16:24.847 "cache": "nvc0n1p0" 00:16:24.847 } 00:16:24.847 } 00:16:24.847 } 00:16:24.847 ] 00:16:24.847 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:16:24.847 01:26:10 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:24.847 01:26:10 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:25.105 01:26:10 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:25.105 01:26:10 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:25.365 [2024-07-21 01:26:10.508139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.508196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:25.365 [2024-07-21 01:26:10.508213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:25.365 [2024-07-21 01:26:10.508227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.508263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:16:25.365 [2024-07-21 01:26:10.509432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.509457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:25.365 [2024-07-21 01:26:10.509491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:16:25.365 [2024-07-21 01:26:10.509506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.509932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.509965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:25.365 [2024-07-21 01:26:10.509979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:16:25.365 [2024-07-21 01:26:10.509989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.512437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.512457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:25.365 [2024-07-21 01:26:10.512470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:16:25.365 [2024-07-21 01:26:10.512480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.517431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.517485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:25.365 [2024-07-21 01:26:10.517508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.926 ms 00:16:25.365 [2024-07-21 01:26:10.517546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.519156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.519190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:25.365 [2024-07-21 01:26:10.519211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:16:25.365 [2024-07-21 01:26:10.519221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.525096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.525132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:25.365 [2024-07-21 01:26:10.525152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.841 ms 00:16:25.365 [2024-07-21 01:26:10.525163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.525320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.525334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:25.365 [2024-07-21 01:26:10.525347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:25.365 [2024-07-21 01:26:10.525357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.527564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.527596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:25.365 [2024-07-21 01:26:10.527611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:16:25.365 
[2024-07-21 01:26:10.527620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.529293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.529324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:25.365 [2024-07-21 01:26:10.529342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:16:25.365 [2024-07-21 01:26:10.529352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.530688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.530718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:25.365 [2024-07-21 01:26:10.530732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:16:25.365 [2024-07-21 01:26:10.530741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.531912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.365 [2024-07-21 01:26:10.531940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:25.365 [2024-07-21 01:26:10.531961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:16:25.365 [2024-07-21 01:26:10.531972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.365 [2024-07-21 01:26:10.532015] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:25.365 [2024-07-21 01:26:10.532032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:16:25.365 [2024-07-21 01:26:10.532209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:25.365 [2024-07-21 01:26:10.532462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.532816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.533988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534251] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:25.366 [2024-07-21 01:26:10.534432] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:25.366 [2024-07-21 01:26:10.534452] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5579b336-656f-49de-bbaa-4fa84962f53b 00:16:25.366 [2024-07-21 01:26:10.534467] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:25.366 [2024-07-21 01:26:10.534481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:25.366 [2024-07-21 01:26:10.534491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:25.366 [2024-07-21 01:26:10.534529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:25.366 [2024-07-21 01:26:10.534539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:25.366 [2024-07-21 01:26:10.534553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:25.366 [2024-07-21 01:26:10.534564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:25.366 [2024-07-21 01:26:10.534577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:25.366 [2024-07-21 01:26:10.534587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:25.366 [2024-07-21 01:26:10.534601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.366 [2024-07-21 01:26:10.534612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:25.366 [2024-07-21 01:26:10.534627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:16:25.366 [2024-07-21 01:26:10.534638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.366 [2024-07-21 01:26:10.537531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.366 [2024-07-21 01:26:10.537642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:16:25.366 [2024-07-21 01:26:10.537725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:16:25.366 [2024-07-21 01:26:10.537762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.366 [2024-07-21 01:26:10.537997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.366 [2024-07-21 01:26:10.538042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:25.366 [2024-07-21 01:26:10.538128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:25.366 [2024-07-21 01:26:10.538164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.366 [2024-07-21 01:26:10.548532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.366 [2024-07-21 01:26:10.548681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.366 [2024-07-21 01:26:10.548780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.366 [2024-07-21 01:26:10.548818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.366 [2024-07-21 01:26:10.548922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.366 [2024-07-21 01:26:10.548962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.366 [2024-07-21 01:26:10.549092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.366 [2024-07-21 01:26:10.549131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.366 [2024-07-21 01:26:10.549287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.366 [2024-07-21 01:26:10.549424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.366 [2024-07-21 01:26:10.549474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.366 [2024-07-21 01:26:10.549506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.366 [2024-07-21 01:26:10.549563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.366 [2024-07-21 01:26:10.549597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.367 [2024-07-21 01:26:10.549693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.549773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.569282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 01:26:10.569472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.367 [2024-07-21 01:26:10.569585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.569625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.582192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 01:26:10.582362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.367 [2024-07-21 01:26:10.582449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.582486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.582621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 
01:26:10.582751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.367 [2024-07-21 01:26:10.582880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.582921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.583047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 01:26:10.583108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:25.367 [2024-07-21 01:26:10.583186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.583263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.583411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 01:26:10.583457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.367 [2024-07-21 01:26:10.583566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.583605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.583701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 01:26:10.583811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:25.367 [2024-07-21 01:26:10.583868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.583901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.584040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 01:26:10.584127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.367 [2024-07-21 01:26:10.584224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.584293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.584395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:25.367 [2024-07-21 01:26:10.584472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.367 [2024-07-21 01:26:10.584527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:25.367 [2024-07-21 01:26:10.584576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.367 [2024-07-21 01:26:10.584838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.740 ms, result 0 00:16:25.367 true 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88217 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 88217 ']' 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 88217 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88217 00:16:25.367 killing process with pid 88217 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88217' 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 88217 00:16:25.367 01:26:10 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 88217 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:28.649 01:26:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:28.649 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:28.649 fio-3.35 00:16:28.649 Starting 1 thread 00:16:33.978 00:16:33.978 test: (groupid=0, jobs=1): err= 0: pid=88385: Sun Jul 21 01:26:18 2024 00:16:33.978 read: IOPS=919, BW=61.1MiB/s (64.0MB/s)(255MiB/4168msec) 00:16:33.978 slat (nsec): min=4129, max=37956, avg=5830.90, stdev=2402.48 00:16:33.978 clat (usec): min=331, max=871, avg=481.32, stdev=51.76 00:16:33.978 lat (usec): min=336, max=879, avg=487.15, stdev=52.09 00:16:33.978 clat percentiles (usec): 00:16:33.978 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 437], 20.00th=[ 445], 00:16:33.978 | 30.00th=[ 
449], 40.00th=[ 453], 50.00th=[ 494], 60.00th=[ 506], 00:16:33.978 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 529], 95.00th=[ 553], 00:16:33.978 | 99.00th=[ 627], 99.50th=[ 701], 99.90th=[ 824], 99.95th=[ 840], 00:16:33.978 | 99.99th=[ 873] 00:16:33.978 write: IOPS=926, BW=61.5MiB/s (64.5MB/s)(256MiB/4163msec); 0 zone resets 00:16:33.978 slat (nsec): min=14285, max=98834, avg=26549.56, stdev=5241.38 00:16:33.978 clat (usec): min=377, max=1024, avg=557.89, stdev=60.53 00:16:33.978 lat (usec): min=419, max=1058, avg=584.44, stdev=60.91 00:16:33.978 clat percentiles (usec): 00:16:33.978 | 1.00th=[ 449], 5.00th=[ 465], 10.00th=[ 510], 20.00th=[ 519], 00:16:33.978 | 30.00th=[ 529], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:16:33.978 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 603], 95.00th=[ 611], 00:16:33.978 | 99.00th=[ 832], 99.50th=[ 898], 99.90th=[ 971], 99.95th=[ 1012], 00:16:33.978 | 99.99th=[ 1029] 00:16:33.978 bw ( KiB/s): min=61336, max=64600, per=100.00%, avg=63104.00, stdev=1167.66, samples=8 00:16:33.978 iops : min= 902, max= 950, avg=928.00, stdev=17.17, samples=8 00:16:33.978 lat (usec) : 500=30.82%, 750=68.24%, 1000=0.91% 00:16:33.978 lat (msec) : 2=0.03% 00:16:33.978 cpu : usr=99.16%, sys=0.22%, ctx=6, majf=0, minf=1181 00:16:33.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:33.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.978 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:33.978 00:16:33.978 Run status group 0 (all jobs): 00:16:33.978 READ: bw=61.1MiB/s (64.0MB/s), 61.1MiB/s-61.1MiB/s (64.0MB/s-64.0MB/s), io=255MiB (267MB), run=4168-4168msec 00:16:33.978 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=256MiB (269MB), run=4163-4163msec 00:16:34.236 ----------------------------------------------------- 00:16:34.236 Suppressions used: 00:16:34.236 count bytes template 00:16:34.236 1 5 /usr/src/fio/parse.c 00:16:34.236 1 8 libtcmalloc_minimal.so 00:16:34.236 1 904 libcrypto.so 00:16:34.236 ----------------------------------------------------- 00:16:34.236 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:34.236 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:34.237 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:34.237 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:34.237 01:26:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:16:34.237 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.237 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:34.237 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:34.237 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:34.495 01:26:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:34.495 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:34.495 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:34.495 fio-3.35 00:16:34.495 Starting 2 threads 00:17:01.024 00:17:01.024 first_half: (groupid=0, jobs=1): err= 0: pid=88472: Sun Jul 21 01:26:43 2024 00:17:01.024 read: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(255MiB/22609msec) 00:17:01.024 slat (nsec): min=3300, max=31537, avg=5602.40, stdev=1639.89 00:17:01.024 clat (usec): min=865, max=249578, avg=35634.03, stdev=17745.98 00:17:01.024 lat (usec): min=870, max=249583, avg=35639.64, stdev=17746.16 00:17:01.024 clat percentiles (msec): 00:17:01.024 | 1.00th=[ 15], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:17:01.024 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:17:01.024 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 54], 00:17:01.024 | 99.00th=[ 134], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 190], 00:17:01.024 | 99.99th=[ 243] 00:17:01.024 write: IOPS=3305, BW=12.9MiB/s (13.5MB/s)(256MiB/19825msec); 0 zone resets 00:17:01.024 slat (usec): min=4, max=644, avg= 7.75, stdev= 7.12 00:17:01.024 clat (usec): min=395, max=90449, avg=8621.10, stdev=14804.06 00:17:01.024 lat (usec): min=402, max=90468, avg=8628.84, stdev=14804.19 00:17:01.024 clat percentiles (usec): 00:17:01.024 | 1.00th=[ 947], 5.00th=[ 1221], 10.00th=[ 1434], 20.00th=[ 1926], 00:17:01.024 | 30.00th=[ 3326], 40.00th=[ 4490], 50.00th=[ 4948], 60.00th=[ 5735], 00:17:01.024 | 70.00th=[ 6325], 80.00th=[ 9241], 90.00th=[12256], 95.00th=[28181], 00:17:01.024 | 99.00th=[77071], 99.50th=[79168], 99.90th=[81265], 99.95th=[83362], 00:17:01.024 | 99.99th=[89654] 00:17:01.024 bw ( KiB/s): min= 912, max=41616, per=97.89%, avg=24962.57, stdev=12494.84, samples=21 00:17:01.024 iops : min= 228, max=10404, avg=6240.62, stdev=3123.69, samples=21 00:17:01.025 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.68% 00:17:01.025 lat (msec) : 2=9.81%, 4=7.15%, 10=23.91%, 20=6.41%, 50=47.00% 
00:17:01.025 lat (msec) : 100=3.73%, 250=1.24% 00:17:01.025 cpu : usr=99.26%, sys=0.23%, ctx=40, majf=0, minf=5607 00:17:01.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:01.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.025 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.025 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.025 second_half: (groupid=0, jobs=1): err= 0: pid=88473: Sun Jul 21 01:26:43 2024 00:17:01.025 read: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(255MiB/22759msec) 00:17:01.025 slat (nsec): min=3301, max=44001, avg=5915.21, stdev=2574.27 00:17:01.025 clat (usec): min=693, max=278639, avg=35144.75, stdev=20258.20 00:17:01.025 lat (usec): min=700, max=278647, avg=35150.66, stdev=20258.43 00:17:01.025 clat percentiles (msec): 00:17:01.025 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 31], 00:17:01.025 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:17:01.025 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 52], 00:17:01.025 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 228], 99.95th=[ 253], 00:17:01.025 | 99.99th=[ 275] 00:17:01.025 write: IOPS=3187, BW=12.5MiB/s (13.1MB/s)(256MiB/20560msec); 0 zone resets 00:17:01.025 slat (usec): min=4, max=812, avg= 8.09, stdev= 5.98 00:17:01.025 clat (usec): min=401, max=90611, avg=9363.51, stdev=15727.57 00:17:01.025 lat (usec): min=408, max=90619, avg=9371.60, stdev=15727.83 00:17:01.025 clat percentiles (usec): 00:17:01.025 | 1.00th=[ 889], 5.00th=[ 1123], 10.00th=[ 1303], 20.00th=[ 1713], 00:17:01.025 | 30.00th=[ 3163], 40.00th=[ 4113], 50.00th=[ 4752], 60.00th=[ 5735], 00:17:01.025 | 70.00th=[ 6390], 80.00th=[ 9896], 90.00th=[14484], 95.00th=[38011], 00:17:01.025 | 99.00th=[78119], 99.50th=[79168], 99.90th=[84411], 99.95th=[88605], 00:17:01.025 | 99.99th=[90702] 00:17:01.025 bw ( KiB/s): min= 208, max=42400, per=93.44%, avg=23827.55, stdev=13141.94, samples=22 00:17:01.025 iops : min= 52, max=10600, avg=5956.86, stdev=3285.45, samples=22 00:17:01.025 lat (usec) : 500=0.01%, 750=0.14%, 1000=1.10% 00:17:01.025 lat (msec) : 2=10.42%, 4=8.23%, 10=22.23%, 20=5.24%, 50=47.69% 00:17:01.025 lat (msec) : 100=3.61%, 250=1.32%, 500=0.03% 00:17:01.025 cpu : usr=99.31%, sys=0.14%, ctx=42, majf=0, minf=5529 00:17:01.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:01.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.025 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.025 issued rwts: total=65379,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.025 00:17:01.025 Run status group 0 (all jobs): 00:17:01.025 READ: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=510MiB (535MB), run=22609-22759msec 00:17:01.025 WRITE: bw=24.9MiB/s (26.1MB/s), 12.5MiB/s-12.9MiB/s (13.1MB/s-13.5MB/s), io=512MiB (537MB), run=19825-20560msec 00:17:01.025 ----------------------------------------------------- 00:17:01.025 Suppressions used: 00:17:01.025 count bytes template 00:17:01.025 2 10 /usr/src/fio/parse.c 00:17:01.025 3 288 /usr/src/fio/iolog.c 00:17:01.025 1 8 libtcmalloc_minimal.so 00:17:01.025 1 904 libcrypto.so 00:17:01.025 ----------------------------------------------------- 00:17:01.025 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 
-- # timing_exit randw-verify-j2 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:01.025 01:26:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:01.025 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:01.025 fio-3.35 00:17:01.025 Starting 1 thread 00:17:15.900 00:17:15.900 test: (groupid=0, jobs=1): err= 0: pid=88764: Sun Jul 21 01:26:59 2024 00:17:15.900 read: IOPS=8147, BW=31.8MiB/s (33.4MB/s)(255MiB/8003msec) 00:17:15.900 slat (nsec): min=3296, max=29463, avg=4708.20, stdev=1228.11 00:17:15.900 clat (usec): min=581, max=31958, avg=15702.88, stdev=757.31 00:17:15.900 lat (usec): min=585, max=31962, avg=15707.59, stdev=757.25 00:17:15.900 clat percentiles (usec): 00:17:15.900 | 1.00th=[14877], 5.00th=[15008], 10.00th=[15139], 20.00th=[15270], 00:17:15.900 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:17:15.900 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16188], 95.00th=[16450], 00:17:15.900 | 99.00th=[18220], 99.50th=[18482], 
99.90th=[23987], 99.95th=[27657], 00:17:15.900 | 99.99th=[31327] 00:17:15.900 write: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(256MiB/5210msec); 0 zone resets 00:17:15.900 slat (usec): min=4, max=1991, avg= 7.08, stdev=10.62 00:17:15.900 clat (usec): min=537, max=55858, avg=10129.63, stdev=11127.97 00:17:15.900 lat (usec): min=544, max=55864, avg=10136.71, stdev=11127.95 00:17:15.900 clat percentiles (usec): 00:17:15.900 | 1.00th=[ 848], 5.00th=[ 979], 10.00th=[ 1074], 20.00th=[ 1221], 00:17:15.900 | 30.00th=[ 1401], 40.00th=[ 1778], 50.00th=[ 6259], 60.00th=[ 9503], 00:17:15.900 | 70.00th=[11994], 80.00th=[15139], 90.00th=[31851], 95.00th=[34341], 00:17:15.900 | 99.00th=[36963], 99.50th=[39060], 99.90th=[50070], 99.95th=[52167], 00:17:15.900 | 99.99th=[54789] 00:17:15.900 bw ( KiB/s): min=18904, max=65936, per=94.73%, avg=47662.55, stdev=12468.53, samples=11 00:17:15.900 iops : min= 4726, max=16484, avg=11915.64, stdev=3117.13, samples=11 00:17:15.900 lat (usec) : 750=0.13%, 1000=2.83% 00:17:15.900 lat (msec) : 2=17.57%, 4=0.65%, 10=9.72%, 20=61.02%, 50=8.04% 00:17:15.900 lat (msec) : 100=0.05% 00:17:15.900 cpu : usr=98.96%, sys=0.36%, ctx=19, majf=0, minf=5577 00:17:15.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:15.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.900 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:15.900 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:15.900 00:17:15.900 Run status group 0 (all jobs): 00:17:15.900 READ: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=255MiB (267MB), run=8003-8003msec 00:17:15.900 WRITE: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=256MiB (268MB), run=5210-5210msec 00:17:15.900 ----------------------------------------------------- 00:17:15.900 Suppressions used: 00:17:15.900 count bytes template 00:17:15.900 1 5 /usr/src/fio/parse.c 00:17:15.900 2 192 /usr/src/fio/iolog.c 00:17:15.900 1 8 libtcmalloc_minimal.so 00:17:15.900 1 904 libcrypto.so 00:17:15.900 ----------------------------------------------------- 00:17:15.900 00:17:15.900 01:26:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:15.900 01:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.900 01:26:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:15.900 Remove shared memory files 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid74234 /dev/shm/spdk_tgt_trace.pid87181 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:15.900 ************************************ 00:17:15.900 END TEST ftl_fio_basic 00:17:15.900 ************************************ 00:17:15.900 00:17:15.900 real 0m56.549s 00:17:15.900 user 2m2.782s 00:17:15.900 sys 
0m3.990s 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:15.900 01:27:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 01:27:00 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:15.900 01:27:00 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:15.900 01:27:00 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:15.900 01:27:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 ************************************ 00:17:15.900 START TEST ftl_bdevperf 00:17:15.900 ************************************ 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:15.900 * Looking for test storage... 00:17:15.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=88991 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 88991 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 88991 ']' 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:15.900 01:27:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 [2024-07-21 01:27:00.422572] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
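# --- Editor's sketch (not part of the captured trace) -------------------------
# Condensed view of the bdevperf flow recorded in this log: the bdevperf example
# app is started in wait mode (-z) so it only brings up the RPC socket, the FTL
# bdev is then assembled over rpc.py, workloads are driven with bdevperf.py
# perform_tests, and the bdev is deleted at the end. All paths, flags and sizes
# below are the ones that appear in this trace; the socket-wait loop and the
# plain kill are illustrative stand-ins for the waitforlisten/killprocess
# helpers used by the test scripts, and the uuid captures assume rpc.py prints
# the created lvstore/lvol identifiers as seen in the trace.
spdk=/home/vagrant/spdk_repo/spdk
rpc=$spdk/scripts/rpc.py

$spdk/build/examples/bdevperf -z -T ftl0 &                 # wait mode: no I/O until perform_tests
bdevperf_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done        # stand-in for waitforlisten

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0     # base NVMe device
lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                 # lvstore on top of it
lvol_uuid=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid") # thin 103424 MiB lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0      # NV cache device
$rpc bdev_split_create nvc0n1 -s 5171 1                               # 5171 MiB cache partition
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 --l2p_dram_limit 20

$spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1   -w randwrite -t 4 -o 69632
$spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
$spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify    -t 4 -o 4096

$rpc bdev_ftl_delete -b ftl0
kill "$bdevperf_pid"                                       # trace uses killprocess via trap
# ------------------------------------------------------------------------------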
00:17:15.900 [2024-07-21 01:27:00.422900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88991 ] 00:17:15.900 [2024-07-21 01:27:00.594379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.900 [2024-07-21 01:27:00.659175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.900 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:15.900 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:17:15.900 01:27:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:15.900 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:15.900 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:15.900 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:15.900 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:15.901 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:16.467 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:16.467 { 00:17:16.467 "name": "nvme0n1", 00:17:16.467 "aliases": [ 00:17:16.467 "7351d324-6e02-41a1-bebc-0767ba1f8244" 00:17:16.467 ], 00:17:16.467 "product_name": "NVMe disk", 00:17:16.467 "block_size": 4096, 00:17:16.467 "num_blocks": 1310720, 00:17:16.467 "uuid": "7351d324-6e02-41a1-bebc-0767ba1f8244", 00:17:16.467 "assigned_rate_limits": { 00:17:16.467 "rw_ios_per_sec": 0, 00:17:16.467 "rw_mbytes_per_sec": 0, 00:17:16.467 "r_mbytes_per_sec": 0, 00:17:16.467 "w_mbytes_per_sec": 0 00:17:16.467 }, 00:17:16.467 "claimed": true, 00:17:16.467 "claim_type": "read_many_write_one", 00:17:16.467 "zoned": false, 00:17:16.467 "supported_io_types": { 00:17:16.467 "read": true, 00:17:16.467 "write": true, 00:17:16.467 "unmap": true, 00:17:16.467 "write_zeroes": true, 00:17:16.467 "flush": true, 00:17:16.467 "reset": true, 00:17:16.467 "compare": true, 00:17:16.467 "compare_and_write": false, 00:17:16.467 "abort": true, 00:17:16.467 "nvme_admin": true, 00:17:16.467 "nvme_io": true 00:17:16.467 }, 00:17:16.467 "driver_specific": { 00:17:16.467 "nvme": [ 00:17:16.467 { 00:17:16.467 "pci_address": "0000:00:11.0", 00:17:16.467 "trid": { 00:17:16.467 "trtype": "PCIe", 00:17:16.467 "traddr": "0000:00:11.0" 00:17:16.467 }, 00:17:16.468 "ctrlr_data": { 00:17:16.468 "cntlid": 0, 00:17:16.468 "vendor_id": "0x1b36", 00:17:16.468 "model_number": "QEMU NVMe Ctrl", 00:17:16.468 "serial_number": "12341", 
00:17:16.468 "firmware_revision": "8.0.0", 00:17:16.468 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:16.468 "oacs": { 00:17:16.468 "security": 0, 00:17:16.468 "format": 1, 00:17:16.468 "firmware": 0, 00:17:16.468 "ns_manage": 1 00:17:16.468 }, 00:17:16.468 "multi_ctrlr": false, 00:17:16.468 "ana_reporting": false 00:17:16.468 }, 00:17:16.468 "vs": { 00:17:16.468 "nvme_version": "1.4" 00:17:16.468 }, 00:17:16.468 "ns_data": { 00:17:16.468 "id": 1, 00:17:16.468 "can_share": false 00:17:16.468 } 00:17:16.468 } 00:17:16.468 ], 00:17:16.468 "mp_policy": "active_passive" 00:17:16.468 } 00:17:16.468 } 00:17:16.468 ]' 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:16.468 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:16.726 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=fa2b9b33-fb71-4fa6-885c-37b56bab8c45 00:17:16.726 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:16.726 01:27:01 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa2b9b33-fb71-4fa6-885c-37b56bab8c45 00:17:16.984 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=39257a95-f7d0-4333-a96b-b0caeed3d4df 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 39257a95-f7d0-4333-a96b-b0caeed3d4df 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=228c9380-e189-4047-a597-6963b880508b 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 228c9380-e189-4047-a597-6963b880508b 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=228c9380-e189-4047-a597-6963b880508b 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 228c9380-e189-4047-a597-6963b880508b 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=228c9380-e189-4047-a597-6963b880508b 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:17.243 01:27:02 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1377 -- # local nb 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 228c9380-e189-4047-a597-6963b880508b 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:17.501 { 00:17:17.501 "name": "228c9380-e189-4047-a597-6963b880508b", 00:17:17.501 "aliases": [ 00:17:17.501 "lvs/nvme0n1p0" 00:17:17.501 ], 00:17:17.501 "product_name": "Logical Volume", 00:17:17.501 "block_size": 4096, 00:17:17.501 "num_blocks": 26476544, 00:17:17.501 "uuid": "228c9380-e189-4047-a597-6963b880508b", 00:17:17.501 "assigned_rate_limits": { 00:17:17.501 "rw_ios_per_sec": 0, 00:17:17.501 "rw_mbytes_per_sec": 0, 00:17:17.501 "r_mbytes_per_sec": 0, 00:17:17.501 "w_mbytes_per_sec": 0 00:17:17.501 }, 00:17:17.501 "claimed": false, 00:17:17.501 "zoned": false, 00:17:17.501 "supported_io_types": { 00:17:17.501 "read": true, 00:17:17.501 "write": true, 00:17:17.501 "unmap": true, 00:17:17.501 "write_zeroes": true, 00:17:17.501 "flush": false, 00:17:17.501 "reset": true, 00:17:17.501 "compare": false, 00:17:17.501 "compare_and_write": false, 00:17:17.501 "abort": false, 00:17:17.501 "nvme_admin": false, 00:17:17.501 "nvme_io": false 00:17:17.501 }, 00:17:17.501 "driver_specific": { 00:17:17.501 "lvol": { 00:17:17.501 "lvol_store_uuid": "39257a95-f7d0-4333-a96b-b0caeed3d4df", 00:17:17.501 "base_bdev": "nvme0n1", 00:17:17.501 "thin_provision": true, 00:17:17.501 "num_allocated_clusters": 0, 00:17:17.501 "snapshot": false, 00:17:17.501 "clone": false, 00:17:17.501 "esnap_clone": false 00:17:17.501 } 00:17:17.501 } 00:17:17.501 } 00:17:17.501 ]' 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:17.501 01:27:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 228c9380-e189-4047-a597-6963b880508b 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=228c9380-e189-4047-a597-6963b880508b 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:17.759 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 228c9380-e189-4047-a597-6963b880508b 00:17:18.017 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:18.017 { 00:17:18.018 "name": 
"228c9380-e189-4047-a597-6963b880508b", 00:17:18.018 "aliases": [ 00:17:18.018 "lvs/nvme0n1p0" 00:17:18.018 ], 00:17:18.018 "product_name": "Logical Volume", 00:17:18.018 "block_size": 4096, 00:17:18.018 "num_blocks": 26476544, 00:17:18.018 "uuid": "228c9380-e189-4047-a597-6963b880508b", 00:17:18.018 "assigned_rate_limits": { 00:17:18.018 "rw_ios_per_sec": 0, 00:17:18.018 "rw_mbytes_per_sec": 0, 00:17:18.018 "r_mbytes_per_sec": 0, 00:17:18.018 "w_mbytes_per_sec": 0 00:17:18.018 }, 00:17:18.018 "claimed": false, 00:17:18.018 "zoned": false, 00:17:18.018 "supported_io_types": { 00:17:18.018 "read": true, 00:17:18.018 "write": true, 00:17:18.018 "unmap": true, 00:17:18.018 "write_zeroes": true, 00:17:18.018 "flush": false, 00:17:18.018 "reset": true, 00:17:18.018 "compare": false, 00:17:18.018 "compare_and_write": false, 00:17:18.018 "abort": false, 00:17:18.018 "nvme_admin": false, 00:17:18.018 "nvme_io": false 00:17:18.018 }, 00:17:18.018 "driver_specific": { 00:17:18.018 "lvol": { 00:17:18.018 "lvol_store_uuid": "39257a95-f7d0-4333-a96b-b0caeed3d4df", 00:17:18.018 "base_bdev": "nvme0n1", 00:17:18.018 "thin_provision": true, 00:17:18.018 "num_allocated_clusters": 0, 00:17:18.018 "snapshot": false, 00:17:18.018 "clone": false, 00:17:18.018 "esnap_clone": false 00:17:18.018 } 00:17:18.018 } 00:17:18.018 } 00:17:18.018 ]' 00:17:18.018 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:18.018 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:18.018 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 228c9380-e189-4047-a597-6963b880508b 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=228c9380-e189-4047-a597-6963b880508b 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:17:18.276 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 228c9380-e189-4047-a597-6963b880508b 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:18.535 { 00:17:18.535 "name": "228c9380-e189-4047-a597-6963b880508b", 00:17:18.535 "aliases": [ 00:17:18.535 "lvs/nvme0n1p0" 00:17:18.535 ], 00:17:18.535 "product_name": "Logical Volume", 00:17:18.535 "block_size": 4096, 00:17:18.535 "num_blocks": 26476544, 00:17:18.535 "uuid": "228c9380-e189-4047-a597-6963b880508b", 00:17:18.535 "assigned_rate_limits": { 00:17:18.535 "rw_ios_per_sec": 0, 00:17:18.535 "rw_mbytes_per_sec": 0, 00:17:18.535 "r_mbytes_per_sec": 0, 00:17:18.535 "w_mbytes_per_sec": 0 00:17:18.535 }, 00:17:18.535 "claimed": false, 
00:17:18.535 "zoned": false, 00:17:18.535 "supported_io_types": { 00:17:18.535 "read": true, 00:17:18.535 "write": true, 00:17:18.535 "unmap": true, 00:17:18.535 "write_zeroes": true, 00:17:18.535 "flush": false, 00:17:18.535 "reset": true, 00:17:18.535 "compare": false, 00:17:18.535 "compare_and_write": false, 00:17:18.535 "abort": false, 00:17:18.535 "nvme_admin": false, 00:17:18.535 "nvme_io": false 00:17:18.535 }, 00:17:18.535 "driver_specific": { 00:17:18.535 "lvol": { 00:17:18.535 "lvol_store_uuid": "39257a95-f7d0-4333-a96b-b0caeed3d4df", 00:17:18.535 "base_bdev": "nvme0n1", 00:17:18.535 "thin_provision": true, 00:17:18.535 "num_allocated_clusters": 0, 00:17:18.535 "snapshot": false, 00:17:18.535 "clone": false, 00:17:18.535 "esnap_clone": false 00:17:18.535 } 00:17:18.535 } 00:17:18.535 } 00:17:18.535 ]' 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:18.535 01:27:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 228c9380-e189-4047-a597-6963b880508b -c nvc0n1p0 --l2p_dram_limit 20 00:17:18.794 [2024-07-21 01:27:03.957019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.794 [2024-07-21 01:27:03.957078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:18.794 [2024-07-21 01:27:03.957097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:18.794 [2024-07-21 01:27:03.957111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.794 [2024-07-21 01:27:03.957185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.794 [2024-07-21 01:27:03.957201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.794 [2024-07-21 01:27:03.957216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:18.794 [2024-07-21 01:27:03.957233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.794 [2024-07-21 01:27:03.957262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:18.794 [2024-07-21 01:27:03.957591] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:18.794 [2024-07-21 01:27:03.957614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.957631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.795 [2024-07-21 01:27:03.957642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:17:18.795 [2024-07-21 01:27:03.957656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.957730] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2ed0e8f4-ce31-4d2e-a1c8-b98c3e35ce3d 00:17:18.795 [2024-07-21 01:27:03.960166] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.960194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:18.795 [2024-07-21 01:27:03.960212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:18.795 [2024-07-21 01:27:03.960222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.974003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.974033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.795 [2024-07-21 01:27:03.974050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.763 ms 00:17:18.795 [2024-07-21 01:27:03.974061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.974161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.974176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.795 [2024-07-21 01:27:03.974190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:18.795 [2024-07-21 01:27:03.974204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.974282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.974301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:18.795 [2024-07-21 01:27:03.974323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:18.795 [2024-07-21 01:27:03.974333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.974363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.795 [2024-07-21 01:27:03.977123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.977157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.795 [2024-07-21 01:27:03.977170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:17:18.795 [2024-07-21 01:27:03.977183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.977226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.977241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:18.795 [2024-07-21 01:27:03.977252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:18.795 [2024-07-21 01:27:03.977279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.977297] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:18.795 [2024-07-21 01:27:03.977445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:18.795 [2024-07-21 01:27:03.977460] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:18.795 [2024-07-21 01:27:03.977477] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:18.795 [2024-07-21 01:27:03.977491] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:18.795 
[2024-07-21 01:27:03.977508] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:18.795 [2024-07-21 01:27:03.977520] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:18.795 [2024-07-21 01:27:03.977537] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:18.795 [2024-07-21 01:27:03.977547] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:18.795 [2024-07-21 01:27:03.977562] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:18.795 [2024-07-21 01:27:03.977572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.977587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:18.795 [2024-07-21 01:27:03.977599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:18.795 [2024-07-21 01:27:03.977613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.977681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.795 [2024-07-21 01:27:03.977708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:18.795 [2024-07-21 01:27:03.977718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:18.795 [2024-07-21 01:27:03.977734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.795 [2024-07-21 01:27:03.977842] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:18.795 [2024-07-21 01:27:03.977860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:18.795 [2024-07-21 01:27:03.977872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.795 [2024-07-21 01:27:03.977887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.795 [2024-07-21 01:27:03.977899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:18.795 [2024-07-21 01:27:03.977912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:18.795 [2024-07-21 01:27:03.977921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:18.795 [2024-07-21 01:27:03.977934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:18.795 [2024-07-21 01:27:03.977944] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:18.795 [2024-07-21 01:27:03.977956] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.795 [2024-07-21 01:27:03.977975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:18.795 [2024-07-21 01:27:03.977989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:18.795 [2024-07-21 01:27:03.977998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.795 [2024-07-21 01:27:03.978017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:18.795 [2024-07-21 01:27:03.978028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:18.795 [2024-07-21 01:27:03.978044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:18.795 [2024-07-21 01:27:03.978066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:18.795 [2024-07-21 
01:27:03.978076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:18.795 [2024-07-21 01:27:03.978099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.795 [2024-07-21 01:27:03.978120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:18.795 [2024-07-21 01:27:03.978134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.795 [2024-07-21 01:27:03.978155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:18.795 [2024-07-21 01:27:03.978164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.795 [2024-07-21 01:27:03.978196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:18.795 [2024-07-21 01:27:03.978215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.795 [2024-07-21 01:27:03.978236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:18.795 [2024-07-21 01:27:03.978244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.795 [2024-07-21 01:27:03.978265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:18.795 [2024-07-21 01:27:03.978277] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:18.795 [2024-07-21 01:27:03.978286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.795 [2024-07-21 01:27:03.978298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:18.795 [2024-07-21 01:27:03.978307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:18.795 [2024-07-21 01:27:03.978319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:18.795 [2024-07-21 01:27:03.978340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:18.795 [2024-07-21 01:27:03.978349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978361] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:18.795 [2024-07-21 01:27:03.978379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:18.795 [2024-07-21 01:27:03.978400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.795 [2024-07-21 01:27:03.978411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.795 [2024-07-21 01:27:03.978425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:18.795 [2024-07-21 01:27:03.978434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:18.795 [2024-07-21 01:27:03.978446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:17:18.795 [2024-07-21 01:27:03.978456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:18.795 [2024-07-21 01:27:03.978468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:18.795 [2024-07-21 01:27:03.978477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:18.795 [2024-07-21 01:27:03.978494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:18.795 [2024-07-21 01:27:03.978510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.795 [2024-07-21 01:27:03.978525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:18.795 [2024-07-21 01:27:03.978538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:18.795 [2024-07-21 01:27:03.978551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:18.795 [2024-07-21 01:27:03.978561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:18.795 [2024-07-21 01:27:03.978574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:18.795 [2024-07-21 01:27:03.978585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:18.795 [2024-07-21 01:27:03.978605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:18.795 [2024-07-21 01:27:03.978616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:18.796 [2024-07-21 01:27:03.978629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:18.796 [2024-07-21 01:27:03.978639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:18.796 [2024-07-21 01:27:03.978655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:18.796 [2024-07-21 01:27:03.978665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:18.796 [2024-07-21 01:27:03.978678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:18.796 [2024-07-21 01:27:03.978688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:18.796 [2024-07-21 01:27:03.978701] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:18.796 [2024-07-21 01:27:03.978712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.796 [2024-07-21 01:27:03.978725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:18.796 [2024-07-21 01:27:03.978735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:18.796 [2024-07-21 01:27:03.978748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:18.796 [2024-07-21 01:27:03.978758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:18.796 [2024-07-21 01:27:03.978772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.796 [2024-07-21 01:27:03.978791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:18.796 [2024-07-21 01:27:03.978811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:17:18.796 [2024-07-21 01:27:03.978821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.796 [2024-07-21 01:27:03.978893] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:18.796 [2024-07-21 01:27:03.978909] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:22.985 [2024-07-21 01:27:07.628067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.628147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:22.985 [2024-07-21 01:27:07.628172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3655.073 ms 00:17:22.985 [2024-07-21 01:27:07.628183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.659718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.659864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.985 [2024-07-21 01:27:07.659924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.447 ms 00:17:22.985 [2024-07-21 01:27:07.659960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.660251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.660313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:22.985 [2024-07-21 01:27:07.660358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:17:22.985 [2024-07-21 01:27:07.660391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.682326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.682392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.985 [2024-07-21 01:27:07.682423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.815 ms 00:17:22.985 [2024-07-21 01:27:07.682465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.682520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.682549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.985 [2024-07-21 01:27:07.682574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:22.985 [2024-07-21 01:27:07.682594] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.683507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.683544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.985 [2024-07-21 01:27:07.683564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:17:22.985 [2024-07-21 01:27:07.683578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.683732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.683750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.985 [2024-07-21 01:27:07.683770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:22.985 [2024-07-21 01:27:07.683783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.694208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.694241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.985 [2024-07-21 01:27:07.694259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.413 ms 00:17:22.985 [2024-07-21 01:27:07.694269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.703273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:22.985 [2024-07-21 01:27:07.712509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.712551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.985 [2024-07-21 01:27:07.712563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.185 ms 00:17:22.985 [2024-07-21 01:27:07.712577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.796890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.796937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:22.985 [2024-07-21 01:27:07.796956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.424 ms 00:17:22.985 [2024-07-21 01:27:07.796974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.797160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.797178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.985 [2024-07-21 01:27:07.797199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:17:22.985 [2024-07-21 01:27:07.797213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.985 [2024-07-21 01:27:07.801156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.985 [2024-07-21 01:27:07.801195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:22.985 [2024-07-21 01:27:07.801212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.918 ms 00:17:22.985 [2024-07-21 01:27:07.801226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.804263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.804299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Save initial chunk info metadata 00:17:22.986 [2024-07-21 01:27:07.804311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.007 ms 00:17:22.986 [2024-07-21 01:27:07.804323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.804588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.804605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:22.986 [2024-07-21 01:27:07.804616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:17:22.986 [2024-07-21 01:27:07.804633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.854844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.854883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:22.986 [2024-07-21 01:27:07.854900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.266 ms 00:17:22.986 [2024-07-21 01:27:07.854913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.860550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.860596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:22.986 [2024-07-21 01:27:07.860608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.614 ms 00:17:22.986 [2024-07-21 01:27:07.860629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.863896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.863932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:22.986 [2024-07-21 01:27:07.863945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:17:22.986 [2024-07-21 01:27:07.863957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.867818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.867885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.986 [2024-07-21 01:27:07.867898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.836 ms 00:17:22.986 [2024-07-21 01:27:07.867915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.867954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.867969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.986 [2024-07-21 01:27:07.867980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:22.986 [2024-07-21 01:27:07.867994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.868063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.986 [2024-07-21 01:27:07.868087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.986 [2024-07-21 01:27:07.868097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:22.986 [2024-07-21 01:27:07.868114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.986 [2024-07-21 01:27:07.869490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 
'FTL startup', duration = 3918.363 ms, result 0 00:17:22.986 { 00:17:22.986 "name": "ftl0", 00:17:22.986 "uuid": "2ed0e8f4-ce31-4d2e-a1c8-b98c3e35ce3d" 00:17:22.986 } 00:17:22.986 01:27:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:22.986 01:27:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:22.986 01:27:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:22.986 01:27:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:22.986 [2024-07-21 01:27:08.171245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:22.986 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:22.986 Zero copy mechanism will not be used. 00:17:22.986 Running I/O for 4 seconds... 00:17:27.177 00:17:27.177 Latency(us) 00:17:27.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.177 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:27.177 ftl0 : 4.00 1465.42 97.31 0.00 0.00 716.39 199.04 2421.41 00:17:27.177 =================================================================================================================== 00:17:27.177 Total : 1465.42 97.31 0.00 0.00 716.39 199.04 2421.41 00:17:27.177 0 00:17:27.177 [2024-07-21 01:27:12.171525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:27.177 01:27:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:27.177 [2024-07-21 01:27:12.278790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:27.177 Running I/O for 4 seconds... 00:17:31.358 00:17:31.358 Latency(us) 00:17:31.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.358 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.358 ftl0 : 4.01 11553.88 45.13 0.00 0.00 11058.14 207.27 32846.96 00:17:31.358 =================================================================================================================== 00:17:31.358 Total : 11553.88 45.13 0.00 0.00 11058.14 0.00 32846.96 00:17:31.358 [2024-07-21 01:27:16.290951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:31.358 0 00:17:31.358 01:27:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:31.358 [2024-07-21 01:27:16.404306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:31.358 Running I/O for 4 seconds... 
00:17:35.586 00:17:35.586 Latency(us) 00:17:35.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.586 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:35.586 Verification LBA range: start 0x0 length 0x1400000 00:17:35.586 ftl0 : 4.01 9492.93 37.08 0.00 0.00 13443.35 250.04 29688.60 00:17:35.586 =================================================================================================================== 00:17:35.586 Total : 9492.93 37.08 0.00 0.00 13443.35 0.00 29688.60 00:17:35.586 0 00:17:35.586 [2024-07-21 01:27:20.413460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:35.586 01:27:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:35.586 [2024-07-21 01:27:20.589464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.586 [2024-07-21 01:27:20.589646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.586 [2024-07-21 01:27:20.589760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.586 [2024-07-21 01:27:20.589806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.586 [2024-07-21 01:27:20.589870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.586 [2024-07-21 01:27:20.591009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.586 [2024-07-21 01:27:20.591138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.586 [2024-07-21 01:27:20.591221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:17:35.587 [2024-07-21 01:27:20.591256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.593214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.593350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.587 [2024-07-21 01:27:20.593435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:17:35.587 [2024-07-21 01:27:20.593473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.823673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.823839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.587 [2024-07-21 01:27:20.823933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 230.518 ms 00:17:35.587 [2024-07-21 01:27:20.823972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.829032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.829160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.587 [2024-07-21 01:27:20.829185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.004 ms 00:17:35.587 [2024-07-21 01:27:20.829196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.831098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.831132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.587 [2024-07-21 01:27:20.831148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.831 ms 00:17:35.587 [2024-07-21 01:27:20.831158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.836901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.836937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.587 [2024-07-21 01:27:20.836954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.716 ms 00:17:35.587 [2024-07-21 01:27:20.836965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.837082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.837094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.587 [2024-07-21 01:27:20.837108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:35.587 [2024-07-21 01:27:20.837127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.839466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.839501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:35.587 [2024-07-21 01:27:20.839516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.313 ms 00:17:35.587 [2024-07-21 01:27:20.839526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.841278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.841310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:35.587 [2024-07-21 01:27:20.841326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:17:35.587 [2024-07-21 01:27:20.841335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.842613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.842647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.587 [2024-07-21 01:27:20.842663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:17:35.587 [2024-07-21 01:27:20.842672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.843912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.587 [2024-07-21 01:27:20.843942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.587 [2024-07-21 01:27:20.843957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:17:35.587 [2024-07-21 01:27:20.843967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.587 [2024-07-21 01:27:20.843998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.587 [2024-07-21 01:27:20.844016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 
[2024-07-21 01:27:20.844071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:35.587 [2024-07-21 01:27:20.844394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.587 [2024-07-21 01:27:20.844836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.845978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.588 [2024-07-21 01:27:20.846426] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.588 [2024-07-21 01:27:20.846447] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2ed0e8f4-ce31-4d2e-a1c8-b98c3e35ce3d 00:17:35.588 [2024-07-21 01:27:20.846458] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.588 [2024-07-21 01:27:20.846471] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:17:35.588 [2024-07-21 01:27:20.846481] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.588 [2024-07-21 01:27:20.846494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.588 [2024-07-21 01:27:20.846503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.588 [2024-07-21 01:27:20.846520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.588 [2024-07-21 01:27:20.846530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.588 [2024-07-21 01:27:20.846543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.588 [2024-07-21 01:27:20.846551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.588 [2024-07-21 01:27:20.846564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.588 [2024-07-21 01:27:20.846578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.588 [2024-07-21 01:27:20.846592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:17:35.588 [2024-07-21 01:27:20.846602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.849171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.588 [2024-07-21 01:27:20.849194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.588 [2024-07-21 01:27:20.849208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:17:35.588 [2024-07-21 01:27:20.849218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.849406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.588 [2024-07-21 01:27:20.849418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.588 [2024-07-21 01:27:20.849440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:35.588 [2024-07-21 01:27:20.849451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.859296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.859324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.588 [2024-07-21 01:27:20.859344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.859353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.859408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.859418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.588 [2024-07-21 01:27:20.859431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.859440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.859526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.859539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.588 [2024-07-21 01:27:20.859552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.859567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.859586] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.859599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.588 [2024-07-21 01:27:20.859612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.859628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.880688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.880730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.588 [2024-07-21 01:27:20.880748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.880767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.893758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.893798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.588 [2024-07-21 01:27:20.893814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.893836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.893945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.893958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.588 [2024-07-21 01:27:20.893973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.893984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.894037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.894049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.588 [2024-07-21 01:27:20.894077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.894087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.894188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.894209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.588 [2024-07-21 01:27:20.894224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.894235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.894280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.894292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.588 [2024-07-21 01:27:20.894306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.894319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.588 [2024-07-21 01:27:20.894371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.894383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.588 [2024-07-21 01:27:20.894397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.588 [2024-07-21 01:27:20.894407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:35.588 [2024-07-21 01:27:20.894465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.588 [2024-07-21 01:27:20.894476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.589 [2024-07-21 01:27:20.894494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.589 [2024-07-21 01:27:20.894504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.848 [2024-07-21 01:27:20.894660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 305.634 ms, result 0 00:17:35.848 true 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 88991 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 88991 ']' 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 88991 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88991 00:17:35.848 killing process with pid 88991 00:17:35.848 Received shutdown signal, test time was about 4.000000 seconds 00:17:35.848 00:17:35.848 Latency(us) 00:17:35.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.848 =================================================================================================================== 00:17:35.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88991' 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 88991 00:17:35.848 01:27:20 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 88991 00:17:37.754 01:27:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:17:37.754 01:27:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:37.754 01:27:23 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.754 01:27:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:38.013 Remove shared memory files 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:38.013 ************************************ 00:17:38.013 END TEST ftl_bdevperf 00:17:38.013 ************************************ 00:17:38.013 00:17:38.013 real 0m22.981s 00:17:38.013 user 0m25.057s 00:17:38.013 sys 0m1.372s 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.013 01:27:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:17:38.013 01:27:23 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:38.013 01:27:23 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:38.013 01:27:23 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:38.013 01:27:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:38.013 ************************************ 00:17:38.013 START TEST ftl_trim 00:17:38.013 ************************************ 00:17:38.013 01:27:23 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:38.273 * Looking for test storage... 00:17:38.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:38.273 01:27:23 ftl.ftl_trim 
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89346 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:38.273 01:27:23 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89346 00:17:38.273 01:27:23 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89346 ']' 00:17:38.273 01:27:23 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.273 01:27:23 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.273 01:27:23 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.273 01:27:23 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.273 01:27:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:38.273 [2024-07-21 01:27:23.482467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
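[Editorial recap, not part of the original console output] The trim.sh prologue traced just above amounts to the sketch below. The exported variables, PCI addresses, core mask, pid 89346 and the 240 s RPC timeout are the values echoed in the xtrace; the backgrounding with '&' and 'svcpid=$!' is an assumption about how trim.sh@39-40 obtain the target pid.

  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  device=0000:00:11.0          # base (data) device, first script argument
  cache_device=0000:00:10.0    # NV cache device, second script argument
  timeout=240                  # seconds passed as -t to the long-running bdev_ftl_create RPC later on
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &   # trim.sh@39; core mask 0x7 = 3 reactors
  svcpid=$!                                                   # 89346 in this run
  waitforlisten "$svcpid"      # autotest helper; waits for the target to answer on /var/tmp/spdk.sock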
00:17:38.273 [2024-07-21 01:27:23.483208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89346 ] 00:17:38.532 [2024-07-21 01:27:23.654498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.532 [2024-07-21 01:27:23.721108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.532 [2024-07-21 01:27:23.721209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.532 [2024-07-21 01:27:23.721307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.098 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:39.098 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:39.098 01:27:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:39.098 01:27:24 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:39.098 01:27:24 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:39.098 01:27:24 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:39.098 01:27:24 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:39.098 01:27:24 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:39.356 01:27:24 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:39.356 01:27:24 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:39.356 01:27:24 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:39.356 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:39.356 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:39.356 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:39.356 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:39.356 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:39.614 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:39.614 { 00:17:39.614 "name": "nvme0n1", 00:17:39.614 "aliases": [ 00:17:39.614 "e7374dc6-ba50-432a-8521-ec2a691a762a" 00:17:39.614 ], 00:17:39.614 "product_name": "NVMe disk", 00:17:39.614 "block_size": 4096, 00:17:39.614 "num_blocks": 1310720, 00:17:39.614 "uuid": "e7374dc6-ba50-432a-8521-ec2a691a762a", 00:17:39.614 "assigned_rate_limits": { 00:17:39.614 "rw_ios_per_sec": 0, 00:17:39.614 "rw_mbytes_per_sec": 0, 00:17:39.614 "r_mbytes_per_sec": 0, 00:17:39.614 "w_mbytes_per_sec": 0 00:17:39.614 }, 00:17:39.614 "claimed": true, 00:17:39.614 "claim_type": "read_many_write_one", 00:17:39.614 "zoned": false, 00:17:39.614 "supported_io_types": { 00:17:39.614 "read": true, 00:17:39.614 "write": true, 00:17:39.614 "unmap": true, 00:17:39.614 "write_zeroes": true, 00:17:39.614 "flush": true, 00:17:39.614 "reset": true, 00:17:39.614 "compare": true, 00:17:39.614 "compare_and_write": false, 00:17:39.614 "abort": true, 00:17:39.614 "nvme_admin": true, 00:17:39.614 "nvme_io": true 00:17:39.614 }, 00:17:39.614 "driver_specific": { 00:17:39.614 "nvme": [ 00:17:39.614 { 00:17:39.614 "pci_address": "0000:00:11.0", 00:17:39.614 "trid": { 00:17:39.614 "trtype": "PCIe", 00:17:39.614 "traddr": "0000:00:11.0" 00:17:39.614 }, 00:17:39.614 "ctrlr_data": { 
00:17:39.614 "cntlid": 0, 00:17:39.614 "vendor_id": "0x1b36", 00:17:39.614 "model_number": "QEMU NVMe Ctrl", 00:17:39.614 "serial_number": "12341", 00:17:39.614 "firmware_revision": "8.0.0", 00:17:39.614 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:39.614 "oacs": { 00:17:39.614 "security": 0, 00:17:39.614 "format": 1, 00:17:39.614 "firmware": 0, 00:17:39.614 "ns_manage": 1 00:17:39.614 }, 00:17:39.614 "multi_ctrlr": false, 00:17:39.614 "ana_reporting": false 00:17:39.614 }, 00:17:39.614 "vs": { 00:17:39.614 "nvme_version": "1.4" 00:17:39.614 }, 00:17:39.614 "ns_data": { 00:17:39.614 "id": 1, 00:17:39.614 "can_share": false 00:17:39.614 } 00:17:39.614 } 00:17:39.614 ], 00:17:39.614 "mp_policy": "active_passive" 00:17:39.614 } 00:17:39.614 } 00:17:39.614 ]' 00:17:39.615 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:39.615 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:39.615 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:39.615 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:39.615 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:39.615 01:27:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:17:39.615 01:27:24 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:39.615 01:27:24 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:39.615 01:27:24 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:39.615 01:27:24 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:39.615 01:27:24 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:39.872 01:27:24 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=39257a95-f7d0-4333-a96b-b0caeed3d4df 00:17:39.872 01:27:24 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:39.872 01:27:24 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39257a95-f7d0-4333-a96b-b0caeed3d4df 00:17:40.130 01:27:25 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:40.130 01:27:25 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=84088371-5326-4209-8e8f-80ab816b5246 00:17:40.130 01:27:25 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 84088371-5326-4209-8e8f-80ab816b5246 00:17:40.388 01:27:25 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.388 01:27:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.388 01:27:25 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:40.388 01:27:25 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:40.388 01:27:25 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.388 01:27:25 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:40.388 01:27:25 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.388 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.388 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:40.388 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:40.388 01:27:25 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:17:40.388 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.645 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:40.645 { 00:17:40.645 "name": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:40.645 "aliases": [ 00:17:40.645 "lvs/nvme0n1p0" 00:17:40.645 ], 00:17:40.645 "product_name": "Logical Volume", 00:17:40.645 "block_size": 4096, 00:17:40.645 "num_blocks": 26476544, 00:17:40.645 "uuid": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:40.646 "assigned_rate_limits": { 00:17:40.646 "rw_ios_per_sec": 0, 00:17:40.646 "rw_mbytes_per_sec": 0, 00:17:40.646 "r_mbytes_per_sec": 0, 00:17:40.646 "w_mbytes_per_sec": 0 00:17:40.646 }, 00:17:40.646 "claimed": false, 00:17:40.646 "zoned": false, 00:17:40.646 "supported_io_types": { 00:17:40.646 "read": true, 00:17:40.646 "write": true, 00:17:40.646 "unmap": true, 00:17:40.646 "write_zeroes": true, 00:17:40.646 "flush": false, 00:17:40.646 "reset": true, 00:17:40.646 "compare": false, 00:17:40.646 "compare_and_write": false, 00:17:40.646 "abort": false, 00:17:40.646 "nvme_admin": false, 00:17:40.646 "nvme_io": false 00:17:40.646 }, 00:17:40.646 "driver_specific": { 00:17:40.646 "lvol": { 00:17:40.646 "lvol_store_uuid": "84088371-5326-4209-8e8f-80ab816b5246", 00:17:40.646 "base_bdev": "nvme0n1", 00:17:40.646 "thin_provision": true, 00:17:40.646 "num_allocated_clusters": 0, 00:17:40.646 "snapshot": false, 00:17:40.646 "clone": false, 00:17:40.646 "esnap_clone": false 00:17:40.646 } 00:17:40.646 } 00:17:40.646 } 00:17:40.646 ]' 00:17:40.646 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:40.646 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:40.646 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:40.646 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:40.646 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:40.646 01:27:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:40.646 01:27:25 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:40.646 01:27:25 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:40.646 01:27:25 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:40.903 01:27:26 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:40.903 01:27:26 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:40.903 01:27:26 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.903 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=19068d99-eea6-4b02-9788-a6a556868fb9 00:17:40.903 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:40.903 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:40.903 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:40.903 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19068d99-eea6-4b02-9788-a6a556868fb9 00:17:41.161 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:41.161 { 00:17:41.161 "name": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:41.161 "aliases": [ 00:17:41.161 
"lvs/nvme0n1p0" 00:17:41.161 ], 00:17:41.161 "product_name": "Logical Volume", 00:17:41.161 "block_size": 4096, 00:17:41.161 "num_blocks": 26476544, 00:17:41.161 "uuid": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:41.161 "assigned_rate_limits": { 00:17:41.161 "rw_ios_per_sec": 0, 00:17:41.161 "rw_mbytes_per_sec": 0, 00:17:41.161 "r_mbytes_per_sec": 0, 00:17:41.161 "w_mbytes_per_sec": 0 00:17:41.161 }, 00:17:41.161 "claimed": false, 00:17:41.161 "zoned": false, 00:17:41.161 "supported_io_types": { 00:17:41.161 "read": true, 00:17:41.161 "write": true, 00:17:41.161 "unmap": true, 00:17:41.161 "write_zeroes": true, 00:17:41.161 "flush": false, 00:17:41.161 "reset": true, 00:17:41.161 "compare": false, 00:17:41.161 "compare_and_write": false, 00:17:41.161 "abort": false, 00:17:41.161 "nvme_admin": false, 00:17:41.161 "nvme_io": false 00:17:41.161 }, 00:17:41.161 "driver_specific": { 00:17:41.161 "lvol": { 00:17:41.161 "lvol_store_uuid": "84088371-5326-4209-8e8f-80ab816b5246", 00:17:41.161 "base_bdev": "nvme0n1", 00:17:41.161 "thin_provision": true, 00:17:41.161 "num_allocated_clusters": 0, 00:17:41.161 "snapshot": false, 00:17:41.161 "clone": false, 00:17:41.161 "esnap_clone": false 00:17:41.161 } 00:17:41.161 } 00:17:41.161 } 00:17:41.161 ]' 00:17:41.161 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:41.161 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:41.161 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:41.161 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:41.161 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:41.161 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:41.161 01:27:26 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:41.161 01:27:26 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:41.419 01:27:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:41.419 01:27:26 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:41.419 01:27:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 19068d99-eea6-4b02-9788-a6a556868fb9 00:17:41.419 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=19068d99-eea6-4b02-9788-a6a556868fb9 00:17:41.419 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:41.419 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:41.419 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:41.419 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19068d99-eea6-4b02-9788-a6a556868fb9 00:17:41.677 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:41.677 { 00:17:41.677 "name": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:41.677 "aliases": [ 00:17:41.677 "lvs/nvme0n1p0" 00:17:41.677 ], 00:17:41.677 "product_name": "Logical Volume", 00:17:41.677 "block_size": 4096, 00:17:41.677 "num_blocks": 26476544, 00:17:41.677 "uuid": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:41.677 "assigned_rate_limits": { 00:17:41.677 "rw_ios_per_sec": 0, 00:17:41.677 "rw_mbytes_per_sec": 0, 00:17:41.677 "r_mbytes_per_sec": 0, 00:17:41.677 "w_mbytes_per_sec": 0 00:17:41.677 }, 00:17:41.677 "claimed": false, 00:17:41.677 "zoned": false, 00:17:41.677 "supported_io_types": { 00:17:41.677 "read": 
true, 00:17:41.677 "write": true, 00:17:41.677 "unmap": true, 00:17:41.677 "write_zeroes": true, 00:17:41.677 "flush": false, 00:17:41.677 "reset": true, 00:17:41.677 "compare": false, 00:17:41.677 "compare_and_write": false, 00:17:41.677 "abort": false, 00:17:41.677 "nvme_admin": false, 00:17:41.677 "nvme_io": false 00:17:41.677 }, 00:17:41.677 "driver_specific": { 00:17:41.677 "lvol": { 00:17:41.677 "lvol_store_uuid": "84088371-5326-4209-8e8f-80ab816b5246", 00:17:41.677 "base_bdev": "nvme0n1", 00:17:41.677 "thin_provision": true, 00:17:41.677 "num_allocated_clusters": 0, 00:17:41.677 "snapshot": false, 00:17:41.677 "clone": false, 00:17:41.677 "esnap_clone": false 00:17:41.677 } 00:17:41.677 } 00:17:41.677 } 00:17:41.677 ]' 00:17:41.677 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:41.677 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:41.677 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:41.677 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:41.677 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:41.677 01:27:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:41.677 01:27:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:41.677 01:27:26 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 19068d99-eea6-4b02-9788-a6a556868fb9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:41.936 [2024-07-21 01:27:26.997329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:26.997532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.936 [2024-07-21 01:27:26.997628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.936 [2024-07-21 01:27:26.997668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.000908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.000945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.936 [2024-07-21 01:27:27.000962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.054 ms 00:17:41.936 [2024-07-21 01:27:27.000988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.001155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.936 [2024-07-21 01:27:27.001410] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.936 [2024-07-21 01:27:27.001434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.001447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.936 [2024-07-21 01:27:27.001463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:41.936 [2024-07-21 01:27:27.001477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.001589] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fb64116f-0918-47a1-8bee-b098e1f13b9f 00:17:41.936 [2024-07-21 01:27:27.003962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.003997] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:41.936 [2024-07-21 01:27:27.004011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:41.936 [2024-07-21 01:27:27.004026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.017568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.017603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.936 [2024-07-21 01:27:27.017617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.469 ms 00:17:41.936 [2024-07-21 01:27:27.017632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.017851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.017879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.936 [2024-07-21 01:27:27.017891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:41.936 [2024-07-21 01:27:27.017906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.017959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.017975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.936 [2024-07-21 01:27:27.017988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:41.936 [2024-07-21 01:27:27.018017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.018062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:41.936 [2024-07-21 01:27:27.020880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.020909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.936 [2024-07-21 01:27:27.020925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:17:41.936 [2024-07-21 01:27:27.020939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.020998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.021028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.936 [2024-07-21 01:27:27.021044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:41.936 [2024-07-21 01:27:27.021070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.021119] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:41.936 [2024-07-21 01:27:27.021271] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.936 [2024-07-21 01:27:27.021291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.936 [2024-07-21 01:27:27.021318] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:41.936 [2024-07-21 01:27:27.021336] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.936 [2024-07-21 01:27:27.021350] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:17:41.936 [2024-07-21 01:27:27.021365] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:41.936 [2024-07-21 01:27:27.021380] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.936 [2024-07-21 01:27:27.021395] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.936 [2024-07-21 01:27:27.021406] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.936 [2024-07-21 01:27:27.021420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.021434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.936 [2024-07-21 01:27:27.021449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:17:41.936 [2024-07-21 01:27:27.021461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.936 [2024-07-21 01:27:27.021560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.936 [2024-07-21 01:27:27.021572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.937 [2024-07-21 01:27:27.021590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:41.937 [2024-07-21 01:27:27.021601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.937 [2024-07-21 01:27:27.021745] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.937 [2024-07-21 01:27:27.021759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.937 [2024-07-21 01:27:27.021793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.937 [2024-07-21 01:27:27.021805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.937 [2024-07-21 01:27:27.021831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.937 [2024-07-21 01:27:27.021855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.937 [2024-07-21 01:27:27.021869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:41.937 [2024-07-21 01:27:27.021879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.937 [2024-07-21 01:27:27.021892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:41.937 [2024-07-21 01:27:27.021901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.937 [2024-07-21 01:27:27.021914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.937 [2024-07-21 01:27:27.021925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:41.937 [2024-07-21 01:27:27.021938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.937 [2024-07-21 01:27:27.021948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.937 [2024-07-21 01:27:27.021965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:41.937 [2024-07-21 01:27:27.021976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.937 [2024-07-21 01:27:27.021990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.937 [2024-07-21 01:27:27.022000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:41.937 [2024-07-21 01:27:27.022013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:17:41.937 [2024-07-21 01:27:27.022023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.937 [2024-07-21 01:27:27.022036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.937 [2024-07-21 01:27:27.022059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.937 [2024-07-21 01:27:27.022069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.937 [2024-07-21 01:27:27.022090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.937 [2024-07-21 01:27:27.022104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.937 [2024-07-21 01:27:27.022127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.937 [2024-07-21 01:27:27.022137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.937 [2024-07-21 01:27:27.022167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.937 [2024-07-21 01:27:27.022179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.937 [2024-07-21 01:27:27.022217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.937 [2024-07-21 01:27:27.022227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:41.937 [2024-07-21 01:27:27.022240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.937 [2024-07-21 01:27:27.022251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.937 [2024-07-21 01:27:27.022265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:41.937 [2024-07-21 01:27:27.022274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.937 [2024-07-21 01:27:27.022297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:41.937 [2024-07-21 01:27:27.022309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022318] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.937 [2024-07-21 01:27:27.022333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.937 [2024-07-21 01:27:27.022344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.937 [2024-07-21 01:27:27.022362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.937 [2024-07-21 01:27:27.022372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.937 [2024-07-21 01:27:27.022391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.937 [2024-07-21 01:27:27.022401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.937 [2024-07-21 01:27:27.022414] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.937 [2024-07-21 01:27:27.022424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.937 [2024-07-21 01:27:27.022437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.937 [2024-07-21 01:27:27.022455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.937 [2024-07-21 01:27:27.022472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.937 [2024-07-21 01:27:27.022485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:41.937 [2024-07-21 01:27:27.022500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:41.937 [2024-07-21 01:27:27.022511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:41.937 [2024-07-21 01:27:27.022526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:41.937 [2024-07-21 01:27:27.022537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:41.937 [2024-07-21 01:27:27.022551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:41.937 [2024-07-21 01:27:27.022562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:41.937 [2024-07-21 01:27:27.022579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:41.937 [2024-07-21 01:27:27.022590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:41.937 [2024-07-21 01:27:27.022603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:41.937 [2024-07-21 01:27:27.022614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:41.937 [2024-07-21 01:27:27.022629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:41.937 [2024-07-21 01:27:27.022640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:41.937 [2024-07-21 01:27:27.022655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:41.937 [2024-07-21 01:27:27.022666] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.937 [2024-07-21 01:27:27.022682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.937 [2024-07-21 01:27:27.022698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.937 [2024-07-21 
01:27:27.022712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.937 [2024-07-21 01:27:27.022724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.937 [2024-07-21 01:27:27.022738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.937 [2024-07-21 01:27:27.022749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.937 [2024-07-21 01:27:27.022764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.937 [2024-07-21 01:27:27.022775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:17:41.937 [2024-07-21 01:27:27.022793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.937 [2024-07-21 01:27:27.022909] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:41.937 [2024-07-21 01:27:27.022931] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:46.128 [2024-07-21 01:27:30.979645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:30.979715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:46.128 [2024-07-21 01:27:30.979733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3963.159 ms 00:17:46.128 [2024-07-21 01:27:30.979753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:30.998898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:30.998989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.128 [2024-07-21 01:27:30.999007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.019 ms 00:17:46.128 [2024-07-21 01:27:30.999021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:30.999172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:30.999194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.128 [2024-07-21 01:27:30.999207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:46.128 [2024-07-21 01:27:30.999239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.027441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.027493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.128 [2024-07-21 01:27:31.027512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.213 ms 00:17:46.128 [2024-07-21 01:27:31.027530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.027633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.027655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.128 [2024-07-21 01:27:31.027674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:46.128 [2024-07-21 01:27:31.027691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 
01:27:31.028478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.028517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.128 [2024-07-21 01:27:31.028532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:17:46.128 [2024-07-21 01:27:31.028549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.028724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.028749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.128 [2024-07-21 01:27:31.028765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:46.128 [2024-07-21 01:27:31.028785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.041079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.041122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.128 [2024-07-21 01:27:31.041140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.257 ms 00:17:46.128 [2024-07-21 01:27:31.041157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.050436] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.128 [2024-07-21 01:27:31.075868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.075912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.128 [2024-07-21 01:27:31.075931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.629 ms 00:17:46.128 [2024-07-21 01:27:31.075942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.170411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.170469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:46.128 [2024-07-21 01:27:31.170488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.503 ms 00:17:46.128 [2024-07-21 01:27:31.170499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.170769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.170784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.128 [2024-07-21 01:27:31.170799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:46.128 [2024-07-21 01:27:31.170810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.174872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.174906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:46.128 [2024-07-21 01:27:31.174923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.000 ms 00:17:46.128 [2024-07-21 01:27:31.174934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.177950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.177981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:46.128 [2024-07-21 
01:27:31.177999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.966 ms 00:17:46.128 [2024-07-21 01:27:31.178009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.178314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.178329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.128 [2024-07-21 01:27:31.178359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:46.128 [2024-07-21 01:27:31.178369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.226072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.128 [2024-07-21 01:27:31.226112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:46.128 [2024-07-21 01:27:31.226131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.710 ms 00:17:46.128 [2024-07-21 01:27:31.226142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.128 [2024-07-21 01:27:31.231807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.129 [2024-07-21 01:27:31.231852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:46.129 [2024-07-21 01:27:31.231869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.620 ms 00:17:46.129 [2024-07-21 01:27:31.231880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.129 [2024-07-21 01:27:31.235179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.129 [2024-07-21 01:27:31.235208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:46.129 [2024-07-21 01:27:31.235224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:17:46.129 [2024-07-21 01:27:31.235233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.129 [2024-07-21 01:27:31.239174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.129 [2024-07-21 01:27:31.239205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.129 [2024-07-21 01:27:31.239221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.880 ms 00:17:46.129 [2024-07-21 01:27:31.239231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.129 [2024-07-21 01:27:31.239322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.129 [2024-07-21 01:27:31.239351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.129 [2024-07-21 01:27:31.239366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:46.129 [2024-07-21 01:27:31.239377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.129 [2024-07-21 01:27:31.239470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.129 [2024-07-21 01:27:31.239483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.129 [2024-07-21 01:27:31.239497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:46.129 [2024-07-21 01:27:31.239507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.129 [2024-07-21 01:27:31.240803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.129 [2024-07-21 01:27:31.241813] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4250.051 ms, result 0 00:17:46.129 [2024-07-21 01:27:31.242733] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.129 { 00:17:46.129 "name": "ftl0", 00:17:46.129 "uuid": "fb64116f-0918-47a1-8bee-b098e1f13b9f" 00:17:46.129 } 00:17:46.129 01:27:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:46.129 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:17:46.129 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:46.129 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:17:46.129 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:46.129 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:46.129 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:46.388 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:46.388 [ 00:17:46.388 { 00:17:46.388 "name": "ftl0", 00:17:46.388 "aliases": [ 00:17:46.388 "fb64116f-0918-47a1-8bee-b098e1f13b9f" 00:17:46.388 ], 00:17:46.388 "product_name": "FTL disk", 00:17:46.388 "block_size": 4096, 00:17:46.388 "num_blocks": 23592960, 00:17:46.388 "uuid": "fb64116f-0918-47a1-8bee-b098e1f13b9f", 00:17:46.388 "assigned_rate_limits": { 00:17:46.388 "rw_ios_per_sec": 0, 00:17:46.388 "rw_mbytes_per_sec": 0, 00:17:46.388 "r_mbytes_per_sec": 0, 00:17:46.388 "w_mbytes_per_sec": 0 00:17:46.388 }, 00:17:46.388 "claimed": false, 00:17:46.388 "zoned": false, 00:17:46.388 "supported_io_types": { 00:17:46.388 "read": true, 00:17:46.388 "write": true, 00:17:46.388 "unmap": true, 00:17:46.388 "write_zeroes": true, 00:17:46.388 "flush": true, 00:17:46.388 "reset": false, 00:17:46.388 "compare": false, 00:17:46.388 "compare_and_write": false, 00:17:46.388 "abort": false, 00:17:46.388 "nvme_admin": false, 00:17:46.388 "nvme_io": false 00:17:46.388 }, 00:17:46.388 "driver_specific": { 00:17:46.388 "ftl": { 00:17:46.388 "base_bdev": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:46.388 "cache": "nvc0n1p0" 00:17:46.388 } 00:17:46.388 } 00:17:46.388 } 00:17:46.388 ] 00:17:46.388 01:27:31 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:17:46.388 01:27:31 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:46.388 01:27:31 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:46.647 01:27:31 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:46.647 01:27:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:46.906 01:27:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:46.906 { 00:17:46.906 "name": "ftl0", 00:17:46.906 "aliases": [ 00:17:46.906 "fb64116f-0918-47a1-8bee-b098e1f13b9f" 00:17:46.906 ], 00:17:46.906 "product_name": "FTL disk", 00:17:46.906 "block_size": 4096, 00:17:46.906 "num_blocks": 23592960, 00:17:46.906 "uuid": "fb64116f-0918-47a1-8bee-b098e1f13b9f", 00:17:46.906 "assigned_rate_limits": { 00:17:46.906 "rw_ios_per_sec": 0, 00:17:46.906 "rw_mbytes_per_sec": 0, 00:17:46.906 "r_mbytes_per_sec": 0, 00:17:46.906 "w_mbytes_per_sec": 0 00:17:46.906 }, 00:17:46.906 "claimed": false, 00:17:46.906 "zoned": false, 00:17:46.906 "supported_io_types": { 
00:17:46.906 "read": true, 00:17:46.906 "write": true, 00:17:46.906 "unmap": true, 00:17:46.906 "write_zeroes": true, 00:17:46.906 "flush": true, 00:17:46.906 "reset": false, 00:17:46.906 "compare": false, 00:17:46.906 "compare_and_write": false, 00:17:46.906 "abort": false, 00:17:46.906 "nvme_admin": false, 00:17:46.906 "nvme_io": false 00:17:46.906 }, 00:17:46.906 "driver_specific": { 00:17:46.906 "ftl": { 00:17:46.906 "base_bdev": "19068d99-eea6-4b02-9788-a6a556868fb9", 00:17:46.906 "cache": "nvc0n1p0" 00:17:46.906 } 00:17:46.906 } 00:17:46.906 } 00:17:46.906 ]' 00:17:46.906 01:27:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:46.906 01:27:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:46.906 01:27:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:46.906 [2024-07-21 01:27:32.199275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.906 [2024-07-21 01:27:32.199420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:46.906 [2024-07-21 01:27:32.199563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:46.906 [2024-07-21 01:27:32.199585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.906 [2024-07-21 01:27:32.199629] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:46.906 [2024-07-21 01:27:32.200766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.906 [2024-07-21 01:27:32.200794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:46.906 [2024-07-21 01:27:32.200809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:17:46.906 [2024-07-21 01:27:32.200820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.906 [2024-07-21 01:27:32.201318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.906 [2024-07-21 01:27:32.201339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:46.906 [2024-07-21 01:27:32.201354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:17:46.906 [2024-07-21 01:27:32.201365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.906 [2024-07-21 01:27:32.204190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.906 [2024-07-21 01:27:32.204213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:46.906 [2024-07-21 01:27:32.204227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:17:46.906 [2024-07-21 01:27:32.204237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.906 [2024-07-21 01:27:32.210044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.906 [2024-07-21 01:27:32.210077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:46.906 [2024-07-21 01:27:32.210096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.761 ms 00:17:46.906 [2024-07-21 01:27:32.210106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.906 [2024-07-21 01:27:32.211805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.906 [2024-07-21 01:27:32.211849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:46.906 [2024-07-21 01:27:32.211865] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:17:46.907 [2024-07-21 01:27:32.211875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.166 [2024-07-21 01:27:32.217544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.166 [2024-07-21 01:27:32.217581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:47.166 [2024-07-21 01:27:32.217598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.627 ms 00:17:47.166 [2024-07-21 01:27:32.217622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.166 [2024-07-21 01:27:32.217870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.166 [2024-07-21 01:27:32.217884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:47.166 [2024-07-21 01:27:32.217898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:17:47.166 [2024-07-21 01:27:32.217909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.166 [2024-07-21 01:27:32.220174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.166 [2024-07-21 01:27:32.220205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:47.166 [2024-07-21 01:27:32.220219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:17:47.166 [2024-07-21 01:27:32.220229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.166 [2024-07-21 01:27:32.221943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.166 [2024-07-21 01:27:32.222063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:47.166 [2024-07-21 01:27:32.222142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:17:47.166 [2024-07-21 01:27:32.222216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.166 [2024-07-21 01:27:32.223691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.166 [2024-07-21 01:27:32.223811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:47.166 [2024-07-21 01:27:32.223901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:17:47.166 [2024-07-21 01:27:32.223936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.166 [2024-07-21 01:27:32.225316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.166 [2024-07-21 01:27:32.225434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:47.166 [2024-07-21 01:27:32.225509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:17:47.166 [2024-07-21 01:27:32.225524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.166 [2024-07-21 01:27:32.225574] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:47.166 [2024-07-21 01:27:32.225591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:47.166 [2024-07-21 01:27:32.225648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.225998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:47.166 [2024-07-21 01:27:32.226479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226634] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:47.167 [2024-07-21 01:27:32.226940] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:47.167 [2024-07-21 01:27:32.226954] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb64116f-0918-47a1-8bee-b098e1f13b9f 00:17:47.167 [2024-07-21 01:27:32.226965] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:47.167 [2024-07-21 01:27:32.226977] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:47.167 [2024-07-21 01:27:32.226987] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:47.167 [2024-07-21 01:27:32.227004] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:47.167 [2024-07-21 01:27:32.227014] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:47.167 [2024-07-21 01:27:32.227028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:47.167 [2024-07-21 01:27:32.227037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:47.167 [2024-07-21 01:27:32.227049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:47.167 [2024-07-21 01:27:32.227058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:47.167 [2024-07-21 01:27:32.227071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.167 [2024-07-21 01:27:32.227081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:47.167 [2024-07-21 01:27:32.227095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:17:47.167 [2024-07-21 01:27:32.227105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.230148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.167 [2024-07-21 01:27:32.230174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:47.167 [2024-07-21 01:27:32.230189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:17:47.167 [2024-07-21 01:27:32.230211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.230390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.167 [2024-07-21 01:27:32.230403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:47.167 [2024-07-21 01:27:32.230428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:47.167 [2024-07-21 01:27:32.230438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.240821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.240861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.167 [2024-07-21 01:27:32.240877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.240910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.241014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.241026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.167 [2024-07-21 01:27:32.241040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.241050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.241128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.241141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.167 [2024-07-21 01:27:32.241159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.241169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.241207] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.241218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.167 [2024-07-21 01:27:32.241232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.241243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.261379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.261431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.167 [2024-07-21 01:27:32.261448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.261477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.274095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.274133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.167 [2024-07-21 01:27:32.274152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.274163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.274252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.274265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:47.167 [2024-07-21 01:27:32.274280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.274295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.274357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.274367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:47.167 [2024-07-21 01:27:32.274381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.274391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.274505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.274519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:47.167 [2024-07-21 01:27:32.274533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.274544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.274610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.274623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:47.167 [2024-07-21 01:27:32.274637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.274662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.274746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.274760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:47.167 [2024-07-21 01:27:32.274774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.274784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:47.167 [2024-07-21 01:27:32.274870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.167 [2024-07-21 01:27:32.274883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:47.167 [2024-07-21 01:27:32.274897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.167 [2024-07-21 01:27:32.274907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.167 [2024-07-21 01:27:32.275141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.944 ms, result 0 00:17:47.167 true 00:17:47.167 01:27:32 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89346 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89346 ']' 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89346 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89346 00:17:47.167 killing process with pid 89346 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89346' 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89346 00:17:47.167 01:27:32 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89346 00:17:50.479 01:27:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:51.417 65536+0 records in 00:17:51.417 65536+0 records out 00:17:51.417 268435456 bytes (268 MB, 256 MiB) copied, 0.943514 s, 285 MB/s 00:17:51.417 01:27:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:51.417 [2024-07-21 01:27:36.634771] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
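For reference, the dd step at ftl/trim.sh line 66 above produces 65536 records of 4 KiB, i.e. 65536 × 4096 = 268435456 bytes (256 MiB), which matches the reported 268 MB copied in roughly 0.94 s (about 285 MB/s). The spdk_dd invocation whose startup banner begins here then replays that file onto the FTL bdev using the saved bdev configuration. A minimal stand-alone sketch of the same two steps follows; the log does not show where dd's output is redirected, so the of= target and the file paths here are illustrative stand-ins rather than the harness's exact repo layout:

  # Generate a 256 MiB random pattern (65536 x 4 KiB records), then write it to ftl0.
  # of=random_pattern is an assumed equivalent of the harness's redirection; paths are placeholders.
  dd if=/dev/urandom of=random_pattern bs=4K count=65536
  ./build/bin/spdk_dd --if=random_pattern --ob=ftl0 --json=ftl.json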
00:17:51.417 [2024-07-21 01:27:36.634918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89528 ] 00:17:51.675 [2024-07-21 01:27:36.803215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.675 [2024-07-21 01:27:36.866113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.935 [2024-07-21 01:27:37.011228] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.935 [2024-07-21 01:27:37.011312] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.935 [2024-07-21 01:27:37.165619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.165676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:51.935 [2024-07-21 01:27:37.165692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:51.935 [2024-07-21 01:27:37.165702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.168365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.168402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.935 [2024-07-21 01:27:37.168414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:17:51.935 [2024-07-21 01:27:37.168423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.168521] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:51.935 [2024-07-21 01:27:37.168766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:51.935 [2024-07-21 01:27:37.168785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.168795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.935 [2024-07-21 01:27:37.168810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:51.935 [2024-07-21 01:27:37.168842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.171216] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:51.935 [2024-07-21 01:27:37.174644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.174679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:51.935 [2024-07-21 01:27:37.174716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.435 ms 00:17:51.935 [2024-07-21 01:27:37.174726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.174802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.174815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:51.935 [2024-07-21 01:27:37.174848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:51.935 [2024-07-21 01:27:37.174860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.186912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 
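Unlike the first bring-up earlier in this section, which default-initialized a new superblock for UUID fb64116f-0918-47a1-8bee-b098e1f13b9f, this second start inside spdk_dd loads and validates the existing superblock (note the "SHM: clean 0, shm_clean 0", "Load super block" and "Validate super block" steps), because the bdev configuration captured earlier with save_subsystem_config is replayed through --json. The exact file handling is not visible in this log, so the redirection below is an assumption; the wrapping itself mirrors the echo/save_subsystem_config/echo sequence shown at ftl/trim.sh lines 54-56:

  # Wrap the saved bdev subsystem config so spdk_dd can recreate ftl0 (redirection to ftl.json is assumed).
  {
    echo '{"subsystems": ['
    ./scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > ftl.json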
01:27:37.186941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.935 [2024-07-21 01:27:37.186953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.020 ms 00:17:51.935 [2024-07-21 01:27:37.186963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.187099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.187113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.935 [2024-07-21 01:27:37.187125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:51.935 [2024-07-21 01:27:37.187138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.187173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.187184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:51.935 [2024-07-21 01:27:37.187201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:51.935 [2024-07-21 01:27:37.187211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.187233] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:51.935 [2024-07-21 01:27:37.189874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.189901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.935 [2024-07-21 01:27:37.189926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.651 ms 00:17:51.935 [2024-07-21 01:27:37.189936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.935 [2024-07-21 01:27:37.189974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.935 [2024-07-21 01:27:37.189996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:51.936 [2024-07-21 01:27:37.190006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:51.936 [2024-07-21 01:27:37.190023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.936 [2024-07-21 01:27:37.190042] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:51.936 [2024-07-21 01:27:37.190071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:51.936 [2024-07-21 01:27:37.190111] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:51.936 [2024-07-21 01:27:37.190139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:51.936 [2024-07-21 01:27:37.190226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:51.936 [2024-07-21 01:27:37.190246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:51.936 [2024-07-21 01:27:37.190258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:51.936 [2024-07-21 01:27:37.190271] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190282] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190293] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:51.936 [2024-07-21 01:27:37.190303] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:51.936 [2024-07-21 01:27:37.190320] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:51.936 [2024-07-21 01:27:37.190340] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:51.936 [2024-07-21 01:27:37.190350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.936 [2024-07-21 01:27:37.190359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:51.936 [2024-07-21 01:27:37.190369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:51.936 [2024-07-21 01:27:37.190378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.936 [2024-07-21 01:27:37.190451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.936 [2024-07-21 01:27:37.190463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:51.936 [2024-07-21 01:27:37.190472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:51.936 [2024-07-21 01:27:37.190481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.936 [2024-07-21 01:27:37.190562] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:51.936 [2024-07-21 01:27:37.190580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:51.936 [2024-07-21 01:27:37.190590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:51.936 [2024-07-21 01:27:37.190620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:51.936 [2024-07-21 01:27:37.190649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.936 [2024-07-21 01:27:37.190677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:51.936 [2024-07-21 01:27:37.190685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:51.936 [2024-07-21 01:27:37.190697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.936 [2024-07-21 01:27:37.190707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:51.936 [2024-07-21 01:27:37.190717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:51.936 [2024-07-21 01:27:37.190726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:51.936 [2024-07-21 01:27:37.190743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190752] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:51.936 [2024-07-21 01:27:37.190770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190778] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:51.936 [2024-07-21 01:27:37.190795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:51.936 [2024-07-21 01:27:37.190821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:51.936 [2024-07-21 01:27:37.190886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.936 [2024-07-21 01:27:37.190905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:51.936 [2024-07-21 01:27:37.190914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.936 [2024-07-21 01:27:37.190932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:51.936 [2024-07-21 01:27:37.190941] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:51.936 [2024-07-21 01:27:37.190951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.936 [2024-07-21 01:27:37.190961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:51.936 [2024-07-21 01:27:37.190970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:51.936 [2024-07-21 01:27:37.190979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.936 [2024-07-21 01:27:37.190989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:51.936 [2024-07-21 01:27:37.190998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:51.936 [2024-07-21 01:27:37.191008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.936 [2024-07-21 01:27:37.191016] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:51.936 [2024-07-21 01:27:37.191030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:51.936 [2024-07-21 01:27:37.191040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.936 [2024-07-21 01:27:37.191057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.936 [2024-07-21 01:27:37.191067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:51.936 [2024-07-21 01:27:37.191077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:51.936 [2024-07-21 01:27:37.191087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:51.936 [2024-07-21 01:27:37.191096] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:51.936 [2024-07-21 01:27:37.191105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:51.936 [2024-07-21 01:27:37.191125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:51.936 [2024-07-21 01:27:37.191136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:51.936 [2024-07-21 01:27:37.191149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.936 [2024-07-21 01:27:37.191165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:51.936 [2024-07-21 01:27:37.191176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:51.936 [2024-07-21 01:27:37.191187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:51.936 [2024-07-21 01:27:37.191197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:51.936 [2024-07-21 01:27:37.191207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:51.936 [2024-07-21 01:27:37.191220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:51.936 [2024-07-21 01:27:37.191232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:51.936 [2024-07-21 01:27:37.191242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:51.936 [2024-07-21 01:27:37.191252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:51.936 [2024-07-21 01:27:37.191263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:51.936 [2024-07-21 01:27:37.191274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:51.936 [2024-07-21 01:27:37.191284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:51.936 [2024-07-21 01:27:37.191294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:51.936 [2024-07-21 01:27:37.191308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:51.936 [2024-07-21 01:27:37.191318] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:51.936 [2024-07-21 01:27:37.191329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.936 [2024-07-21 01:27:37.191339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:17:51.936 [2024-07-21 01:27:37.191350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:51.936 [2024-07-21 01:27:37.191360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:51.936 [2024-07-21 01:27:37.191371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:51.936 [2024-07-21 01:27:37.191381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.936 [2024-07-21 01:27:37.191394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:51.936 [2024-07-21 01:27:37.191407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:17:51.936 [2024-07-21 01:27:37.191424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.936 [2024-07-21 01:27:37.223042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.936 [2024-07-21 01:27:37.223085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.936 [2024-07-21 01:27:37.223105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.586 ms 00:17:51.936 [2024-07-21 01:27:37.223124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.936 [2024-07-21 01:27:37.223271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.936 [2024-07-21 01:27:37.223299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.936 [2024-07-21 01:27:37.223314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:51.936 [2024-07-21 01:27:37.223327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.936 [2024-07-21 01:27:37.239665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.936 [2024-07-21 01:27:37.239699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.936 [2024-07-21 01:27:37.239713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.334 ms 00:17:51.936 [2024-07-21 01:27:37.239727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.936 [2024-07-21 01:27:37.239818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.936 [2024-07-21 01:27:37.239831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.936 [2024-07-21 01:27:37.239852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:51.937 [2024-07-21 01:27:37.239862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.937 [2024-07-21 01:27:37.240620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.937 [2024-07-21 01:27:37.240647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.937 [2024-07-21 01:27:37.240659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:17:51.937 [2024-07-21 01:27:37.240669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.937 [2024-07-21 01:27:37.240806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.937 [2024-07-21 01:27:37.240845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.937 [2024-07-21 01:27:37.240856] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:51.937 [2024-07-21 01:27:37.240866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.195 [2024-07-21 01:27:37.251498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.195 [2024-07-21 01:27:37.251528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.195 [2024-07-21 01:27:37.251542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.624 ms 00:17:52.195 [2024-07-21 01:27:37.251552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.195 [2024-07-21 01:27:37.255257] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:52.195 [2024-07-21 01:27:37.255292] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:52.195 [2024-07-21 01:27:37.255311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.195 [2024-07-21 01:27:37.255331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:52.195 [2024-07-21 01:27:37.255358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.627 ms 00:17:52.195 [2024-07-21 01:27:37.255367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.195 [2024-07-21 01:27:37.268261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.195 [2024-07-21 01:27:37.268299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:52.195 [2024-07-21 01:27:37.268334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.864 ms 00:17:52.195 [2024-07-21 01:27:37.268345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.195 [2024-07-21 01:27:37.270374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.195 [2024-07-21 01:27:37.270407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:52.196 [2024-07-21 01:27:37.270418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:17:52.196 [2024-07-21 01:27:37.270428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.271961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.271992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:52.196 [2024-07-21 01:27:37.272004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:17:52.196 [2024-07-21 01:27:37.272013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.272297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.272333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:52.196 [2024-07-21 01:27:37.272344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:17:52.196 [2024-07-21 01:27:37.272354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.302414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.302472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:52.196 [2024-07-21 01:27:37.302507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.081 ms 
00:17:52.196 [2024-07-21 01:27:37.302519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.308955] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:52.196 [2024-07-21 01:27:37.333511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.333552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:52.196 [2024-07-21 01:27:37.333568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.954 ms 00:17:52.196 [2024-07-21 01:27:37.333596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.333691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.333704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:52.196 [2024-07-21 01:27:37.333717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:52.196 [2024-07-21 01:27:37.333731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.333799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.333820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:52.196 [2024-07-21 01:27:37.333831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:52.196 [2024-07-21 01:27:37.333874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.333905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.333916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:52.196 [2024-07-21 01:27:37.333927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:52.196 [2024-07-21 01:27:37.333937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.333981] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:52.196 [2024-07-21 01:27:37.333994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.334004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:52.196 [2024-07-21 01:27:37.334022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:52.196 [2024-07-21 01:27:37.334032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.339060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.339095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:52.196 [2024-07-21 01:27:37.339108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms 00:17:52.196 [2024-07-21 01:27:37.339119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 01:27:37.339222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.196 [2024-07-21 01:27:37.339235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:52.196 [2024-07-21 01:27:37.339257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:52.196 [2024-07-21 01:27:37.339267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.196 [2024-07-21 
01:27:37.340670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:52.196 [2024-07-21 01:27:37.341633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 174.914 ms, result 0 00:17:52.196 [2024-07-21 01:27:37.342403] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.196 [2024-07-21 01:27:37.350487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:04.039  Copying: 21/256 [MB] (21 MBps) Copying: 43/256 [MB] (21 MBps) Copying: 65/256 [MB] (22 MBps) Copying: 87/256 [MB] (21 MBps) Copying: 108/256 [MB] (21 MBps) Copying: 131/256 [MB] (22 MBps) Copying: 153/256 [MB] (22 MBps) Copying: 174/256 [MB] (21 MBps) Copying: 196/256 [MB] (21 MBps) Copying: 217/256 [MB] (21 MBps) Copying: 239/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-07-21 01:27:49.079373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:04.039 [2024-07-21 01:27:49.081505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.081549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:04.039 [2024-07-21 01:27:49.081566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:04.039 [2024-07-21 01:27:49.081577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.081598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:04.039 [2024-07-21 01:27:49.082789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.082814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:04.039 [2024-07-21 01:27:49.082834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:18:04.039 [2024-07-21 01:27:49.082845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.084934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.084987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:04.039 [2024-07-21 01:27:49.085000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:18:04.039 [2024-07-21 01:27:49.085016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.091573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.091611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:04.039 [2024-07-21 01:27:49.091623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.548 ms 00:18:04.039 [2024-07-21 01:27:49.091632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.097028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.097058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:04.039 [2024-07-21 01:27:49.097070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.354 ms 00:18:04.039 [2024-07-21 01:27:49.097085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 
[2024-07-21 01:27:49.098601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.098639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:04.039 [2024-07-21 01:27:49.098651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:18:04.039 [2024-07-21 01:27:49.098660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.103248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.103283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:04.039 [2024-07-21 01:27:49.103322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.540 ms 00:18:04.039 [2024-07-21 01:27:49.103332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.103448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.103460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:04.039 [2024-07-21 01:27:49.103471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:04.039 [2024-07-21 01:27:49.103496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.105718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.105753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:04.039 [2024-07-21 01:27:49.105764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.208 ms 00:18:04.039 [2024-07-21 01:27:49.105774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.107418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.107450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:04.039 [2024-07-21 01:27:49.107461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:18:04.039 [2024-07-21 01:27:49.107470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.108753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.108788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:04.039 [2024-07-21 01:27:49.108798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:18:04.039 [2024-07-21 01:27:49.108808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.110158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.039 [2024-07-21 01:27:49.110190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:04.039 [2024-07-21 01:27:49.110200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:18:04.039 [2024-07-21 01:27:49.110209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.039 [2024-07-21 01:27:49.110236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:04.039 [2024-07-21 01:27:49.110254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: 
free 00:18:04.039 [2024-07-21 01:27:49.110278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:04.039 [2024-07-21 01:27:49.110451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 
wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.110997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111079] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:04.040 [2024-07-21 01:27:49.111344] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:04.040 [2024-07-21 01:27:49.111354] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: fb64116f-0918-47a1-8bee-b098e1f13b9f 00:18:04.040 [2024-07-21 01:27:49.111365] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:04.040 [2024-07-21 01:27:49.111386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:04.040 [2024-07-21 01:27:49.111396] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:04.040 [2024-07-21 01:27:49.111406] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:04.040 [2024-07-21 01:27:49.111416] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:04.040 [2024-07-21 01:27:49.111426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:04.040 [2024-07-21 01:27:49.111441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:04.040 [2024-07-21 01:27:49.111450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:04.040 [2024-07-21 01:27:49.111469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:04.040 [2024-07-21 01:27:49.111478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.040 [2024-07-21 01:27:49.111488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:04.040 [2024-07-21 01:27:49.111499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:18:04.040 [2024-07-21 01:27:49.111509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.040 [2024-07-21 01:27:49.114276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.040 [2024-07-21 01:27:49.114297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:04.040 [2024-07-21 01:27:49.114309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.746 ms 00:18:04.040 [2024-07-21 01:27:49.114319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.040 [2024-07-21 01:27:49.114490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.040 [2024-07-21 01:27:49.114500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:04.041 [2024-07-21 01:27:49.114511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:04.041 [2024-07-21 01:27:49.114521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.124372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.124399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.041 [2024-07-21 01:27:49.124411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.124425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.124508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.124519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.041 [2024-07-21 01:27:49.124529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.124549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.124595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.124607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:18:04.041 [2024-07-21 01:27:49.124617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.124626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.124658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.124668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.041 [2024-07-21 01:27:49.124678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.124687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.142910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.142963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.041 [2024-07-21 01:27:49.142977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.142988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.155267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.155302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.041 [2024-07-21 01:27:49.155315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.155326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.155386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.155410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.041 [2024-07-21 01:27:49.155421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.155432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.155465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.155481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.041 [2024-07-21 01:27:49.155492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.155502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.155600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.155614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.041 [2024-07-21 01:27:49.155625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.155635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.155678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.155706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:04.041 [2024-07-21 01:27:49.155721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.155731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.155779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.155798] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.041 [2024-07-21 01:27:49.155809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.155819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.155912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.041 [2024-07-21 01:27:49.155929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.041 [2024-07-21 01:27:49.155940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.041 [2024-07-21 01:27:49.155951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.041 [2024-07-21 01:27:49.156114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.696 ms, result 0 00:18:04.620 00:18:04.620 00:18:04.620 01:27:49 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:04.620 01:27:49 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89667 00:18:04.620 01:27:49 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89667 00:18:04.620 01:27:49 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89667 ']' 00:18:04.620 01:27:49 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.620 01:27:49 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:04.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.620 01:27:49 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.620 01:27:49 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:04.620 01:27:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:04.620 [2024-07-21 01:27:49.766781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
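Note on the step traced above: after the previous target shuts down, trim.sh@71-73 starts a fresh spdk_tgt with FTL init logging (-L ftl_init), records its pid as svcpid=89667, and waits for the RPC socket /var/tmp/spdk.sock before trim.sh@75 replays the saved bdev configuration with rpc.py load_config. A minimal sketch of that launch-and-wait pattern, assuming the same repository layout and that the JSON saved earlier is fed to load_config on stdin:

    # Start the SPDK target with FTL init logging and remember its pid (as in trim.sh@71-72).
    "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    # Poll the default RPC socket until the target answers; this loop is only a
    # stand-in for the waitforlisten helper from autotest_common.sh, not its implementation.
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # Recreate the bdev stack (base bdev, NV cache, ftl0) from the saved configuration.
    "$SPDK_DIR/scripts/rpc.py" load_config < "$SPDK_DIR/test/ftl/config/ftl.json"

The bdev_open_ext notices about nvc0n1 that follow appear to come from FTL waiting for the cache bdev to be created during load_config; once both bdevs are present, the same 'FTL startup' management sequence as earlier runs again.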
00:18:04.620 [2024-07-21 01:27:49.766917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89667 ] 00:18:04.879 [2024-07-21 01:27:49.932673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.879 [2024-07-21 01:27:49.995753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.446 01:27:50 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.446 01:27:50 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:18:05.446 01:27:50 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:05.446 [2024-07-21 01:27:50.719750] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:05.446 [2024-07-21 01:27:50.719815] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:05.706 [2024-07-21 01:27:50.890316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.890366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:05.706 [2024-07-21 01:27:50.890384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:05.706 [2024-07-21 01:27:50.890394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.892943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.892986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.706 [2024-07-21 01:27:50.893004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:18:05.706 [2024-07-21 01:27:50.893015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.893094] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:05.706 [2024-07-21 01:27:50.893390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:05.706 [2024-07-21 01:27:50.893422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.893433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.706 [2024-07-21 01:27:50.893446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:18:05.706 [2024-07-21 01:27:50.893456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.895867] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:05.706 [2024-07-21 01:27:50.899347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.899398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:05.706 [2024-07-21 01:27:50.899412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.493 ms 00:18:05.706 [2024-07-21 01:27:50.899425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.899499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.899515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:05.706 [2024-07-21 01:27:50.899526] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:05.706 [2024-07-21 01:27:50.899542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.911238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.911273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.706 [2024-07-21 01:27:50.911285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.659 ms 00:18:05.706 [2024-07-21 01:27:50.911298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.911409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.911426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.706 [2024-07-21 01:27:50.911444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:05.706 [2024-07-21 01:27:50.911460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.911502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.911516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:05.706 [2024-07-21 01:27:50.911526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:05.706 [2024-07-21 01:27:50.911537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.706 [2024-07-21 01:27:50.911561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:05.706 [2024-07-21 01:27:50.914104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.706 [2024-07-21 01:27:50.914128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.707 [2024-07-21 01:27:50.914145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:18:05.707 [2024-07-21 01:27:50.914158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.914199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.914209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:05.707 [2024-07-21 01:27:50.914222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:05.707 [2024-07-21 01:27:50.914231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.914256] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:05.707 [2024-07-21 01:27:50.914291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:05.707 [2024-07-21 01:27:50.914335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:05.707 [2024-07-21 01:27:50.914357] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:05.707 [2024-07-21 01:27:50.914452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:05.707 [2024-07-21 01:27:50.914469] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:05.707 [2024-07-21 01:27:50.914485] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:05.707 [2024-07-21 01:27:50.914497] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:05.707 [2024-07-21 01:27:50.914511] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:05.707 [2024-07-21 01:27:50.914533] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:05.707 [2024-07-21 01:27:50.914550] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:05.707 [2024-07-21 01:27:50.914559] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:05.707 [2024-07-21 01:27:50.914578] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:05.707 [2024-07-21 01:27:50.914591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.914604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:05.707 [2024-07-21 01:27:50.914614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:18:05.707 [2024-07-21 01:27:50.914627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.914694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.914707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:05.707 [2024-07-21 01:27:50.914717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:05.707 [2024-07-21 01:27:50.914731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.914811] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:05.707 [2024-07-21 01:27:50.914859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:05.707 [2024-07-21 01:27:50.914869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.707 [2024-07-21 01:27:50.914882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.707 [2024-07-21 01:27:50.914892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:05.707 [2024-07-21 01:27:50.914908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:05.707 [2024-07-21 01:27:50.914917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:05.707 [2024-07-21 01:27:50.914929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:05.707 [2024-07-21 01:27:50.914938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:05.707 [2024-07-21 01:27:50.914950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.707 [2024-07-21 01:27:50.914959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:05.707 [2024-07-21 01:27:50.914974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:05.707 [2024-07-21 01:27:50.914983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.707 [2024-07-21 01:27:50.914994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:05.707 [2024-07-21 01:27:50.915004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:05.707 [2024-07-21 01:27:50.915016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.707 
[2024-07-21 01:27:50.915024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:05.707 [2024-07-21 01:27:50.915036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:05.707 [2024-07-21 01:27:50.915044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:05.707 [2024-07-21 01:27:50.915075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.707 [2024-07-21 01:27:50.915099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:05.707 [2024-07-21 01:27:50.915111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.707 [2024-07-21 01:27:50.915131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:05.707 [2024-07-21 01:27:50.915140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.707 [2024-07-21 01:27:50.915162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:05.707 [2024-07-21 01:27:50.915173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.707 [2024-07-21 01:27:50.915193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:05.707 [2024-07-21 01:27:50.915201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.707 [2024-07-21 01:27:50.915221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:05.707 [2024-07-21 01:27:50.915232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:05.707 [2024-07-21 01:27:50.915241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.707 [2024-07-21 01:27:50.915255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:05.707 [2024-07-21 01:27:50.915263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:05.707 [2024-07-21 01:27:50.915274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:05.707 [2024-07-21 01:27:50.915294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:05.707 [2024-07-21 01:27:50.915302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915314] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:05.707 [2024-07-21 01:27:50.915324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:05.707 [2024-07-21 01:27:50.915336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.707 [2024-07-21 01:27:50.915344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.707 [2024-07-21 01:27:50.915357] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:05.707 [2024-07-21 01:27:50.915366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:05.707 [2024-07-21 01:27:50.915378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:05.707 [2024-07-21 01:27:50.915387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:05.707 [2024-07-21 01:27:50.915398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:05.707 [2024-07-21 01:27:50.915407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:05.707 [2024-07-21 01:27:50.915423] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:05.707 [2024-07-21 01:27:50.915435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.707 [2024-07-21 01:27:50.915449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:05.707 [2024-07-21 01:27:50.915460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:05.707 [2024-07-21 01:27:50.915483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:05.707 [2024-07-21 01:27:50.915494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:05.707 [2024-07-21 01:27:50.915507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:05.707 [2024-07-21 01:27:50.915519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:05.707 [2024-07-21 01:27:50.915532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:05.707 [2024-07-21 01:27:50.915541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:05.707 [2024-07-21 01:27:50.915554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:05.707 [2024-07-21 01:27:50.915564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:05.707 [2024-07-21 01:27:50.915576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:05.707 [2024-07-21 01:27:50.915585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:05.707 [2024-07-21 01:27:50.915598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:05.707 [2024-07-21 01:27:50.915607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:05.707 [2024-07-21 01:27:50.915622] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:05.707 [2024-07-21 
01:27:50.915636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.707 [2024-07-21 01:27:50.915653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:05.707 [2024-07-21 01:27:50.915679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:05.707 [2024-07-21 01:27:50.915691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:05.707 [2024-07-21 01:27:50.915701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:05.707 [2024-07-21 01:27:50.915715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.915727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:05.707 [2024-07-21 01:27:50.915747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:18:05.707 [2024-07-21 01:27:50.915756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.936340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.936373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.707 [2024-07-21 01:27:50.936389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.495 ms 00:18:05.707 [2024-07-21 01:27:50.936401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.936512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.936528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:05.707 [2024-07-21 01:27:50.936546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:05.707 [2024-07-21 01:27:50.936555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.953369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.953403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.707 [2024-07-21 01:27:50.953419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms 00:18:05.707 [2024-07-21 01:27:50.953429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.953507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.953519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.707 [2024-07-21 01:27:50.953532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.707 [2024-07-21 01:27:50.953542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.954280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.954300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.707 [2024-07-21 01:27:50.954315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:18:05.707 [2024-07-21 01:27:50.954325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.954450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.954466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.707 [2024-07-21 01:27:50.954483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:05.707 [2024-07-21 01:27:50.954493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.707 [2024-07-21 01:27:50.965897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.707 [2024-07-21 01:27:50.965928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.707 [2024-07-21 01:27:50.965944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.398 ms 00:18:05.707 [2024-07-21 01:27:50.965956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.708 [2024-07-21 01:27:50.969696] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:05.708 [2024-07-21 01:27:50.969732] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:05.708 [2024-07-21 01:27:50.969751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.708 [2024-07-21 01:27:50.969763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:05.708 [2024-07-21 01:27:50.969777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:18:05.708 [2024-07-21 01:27:50.969788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.708 [2024-07-21 01:27:50.982623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.708 [2024-07-21 01:27:50.982663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:05.708 [2024-07-21 01:27:50.982680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.765 ms 00:18:05.708 [2024-07-21 01:27:50.982689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.708 [2024-07-21 01:27:50.984416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.708 [2024-07-21 01:27:50.984446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:05.708 [2024-07-21 01:27:50.984460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:18:05.708 [2024-07-21 01:27:50.984469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.708 [2024-07-21 01:27:50.985893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.708 [2024-07-21 01:27:50.985923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:05.708 [2024-07-21 01:27:50.985938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:18:05.708 [2024-07-21 01:27:50.985948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.708 [2024-07-21 01:27:50.986239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.708 [2024-07-21 01:27:50.986261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:05.708 [2024-07-21 01:27:50.986277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:05.708 [2024-07-21 01:27:50.986288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 
01:27:51.028822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.028890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:05.967 [2024-07-21 01:27:51.028911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.567 ms 00:18:05.967 [2024-07-21 01:27:51.028923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.034996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:05.967 [2024-07-21 01:27:51.058033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.058074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.967 [2024-07-21 01:27:51.058090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.070 ms 00:18:05.967 [2024-07-21 01:27:51.058104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.058185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.058201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:05.967 [2024-07-21 01:27:51.058217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:05.967 [2024-07-21 01:27:51.058231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.058294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.058309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.967 [2024-07-21 01:27:51.058320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:05.967 [2024-07-21 01:27:51.058333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.058359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.058372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:05.967 [2024-07-21 01:27:51.058382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:05.967 [2024-07-21 01:27:51.058402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.058443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:05.967 [2024-07-21 01:27:51.058467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.058477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:05.967 [2024-07-21 01:27:51.058490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:05.967 [2024-07-21 01:27:51.058500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.063405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.063440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:05.967 [2024-07-21 01:27:51.063457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.883 ms 00:18:05.967 [2024-07-21 01:27:51.063468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.063553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.063566] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:05.967 [2024-07-21 01:27:51.063580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:05.967 [2024-07-21 01:27:51.063590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.064989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:05.967 [2024-07-21 01:27:51.065986] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 174.564 ms, result 0 00:18:05.967 [2024-07-21 01:27:51.067079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:05.967 Some configs were skipped because the RPC state that can call them passed over. 00:18:05.967 01:27:51 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:05.967 [2024-07-21 01:27:51.267770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.967 [2024-07-21 01:27:51.267817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:05.967 [2024-07-21 01:27:51.267850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:18:05.967 [2024-07-21 01:27:51.267866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.967 [2024-07-21 01:27:51.267916] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.648 ms, result 0 00:18:05.967 true 00:18:06.226 01:27:51 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:06.226 [2024-07-21 01:27:51.459302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.226 [2024-07-21 01:27:51.459339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:06.226 [2024-07-21 01:27:51.459355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:18:06.226 [2024-07-21 01:27:51.459364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.226 [2024-07-21 01:27:51.459401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.255 ms, result 0 00:18:06.226 true 00:18:06.226 01:27:51 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89667 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89667 ']' 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89667 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89667 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:06.226 killing process with pid 89667 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89667' 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89667 00:18:06.226 01:27:51 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89667 00:18:06.486 [2024-07-21 01:27:51.747332] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.747400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:06.486 [2024-07-21 01:27:51.747416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:06.486 [2024-07-21 01:27:51.747429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.747455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:06.486 [2024-07-21 01:27:51.748662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.748692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:06.486 [2024-07-21 01:27:51.748706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:18:06.486 [2024-07-21 01:27:51.748720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.749056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.749085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:06.486 [2024-07-21 01:27:51.749099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:18:06.486 [2024-07-21 01:27:51.749110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.752420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.752455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:06.486 [2024-07-21 01:27:51.752481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:18:06.486 [2024-07-21 01:27:51.752501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.757910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.757945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:06.486 [2024-07-21 01:27:51.757959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:18:06.486 [2024-07-21 01:27:51.757968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.759627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.759663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:06.486 [2024-07-21 01:27:51.759677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:18:06.486 [2024-07-21 01:27:51.759686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.764489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.764524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:06.486 [2024-07-21 01:27:51.764537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:18:06.486 [2024-07-21 01:27:51.764547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.764682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.764694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:06.486 [2024-07-21 01:27:51.764708] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:06.486 [2024-07-21 01:27:51.764717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.767155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.767187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:06.486 [2024-07-21 01:27:51.767202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:18:06.486 [2024-07-21 01:27:51.767211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.768914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.768946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:06.486 [2024-07-21 01:27:51.768960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:18:06.486 [2024-07-21 01:27:51.768969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.770311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.770342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:06.486 [2024-07-21 01:27:51.770355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:18:06.486 [2024-07-21 01:27:51.770363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.771643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.486 [2024-07-21 01:27:51.771675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:06.486 [2024-07-21 01:27:51.771689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:18:06.486 [2024-07-21 01:27:51.771698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.486 [2024-07-21 01:27:51.771730] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:06.486 [2024-07-21 01:27:51.771748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771888] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.771984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 
[2024-07-21 01:27:51.772198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:06.486 [2024-07-21 01:27:51.772488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:06.487 [2024-07-21 01:27:51.772498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.772986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.773002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:06.487 [2024-07-21 01:27:51.773018] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:06.487 [2024-07-21 01:27:51.773030] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb64116f-0918-47a1-8bee-b098e1f13b9f 00:18:06.487 [2024-07-21 01:27:51.773041] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:06.487 [2024-07-21 01:27:51.773054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:06.487 [2024-07-21 01:27:51.773066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:06.487 [2024-07-21 01:27:51.773080] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:06.487 [2024-07-21 01:27:51.773089] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:06.487 [2024-07-21 01:27:51.773101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:06.487 [2024-07-21 01:27:51.773111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:06.487 [2024-07-21 01:27:51.773122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:06.487 [2024-07-21 01:27:51.773130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:06.487 [2024-07-21 01:27:51.773141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:06.487 [2024-07-21 01:27:51.773151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:06.487 [2024-07-21 01:27:51.773164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:18:06.487 [2024-07-21 01:27:51.773173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.487 [2024-07-21 01:27:51.775739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.487 [2024-07-21 01:27:51.775760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:06.487 [2024-07-21 01:27:51.775781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:18:06.487 [2024-07-21 01:27:51.775790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.487 [2024-07-21 01:27:51.775968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.487 [2024-07-21 01:27:51.775980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:06.487 [2024-07-21 01:27:51.775993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:18:06.487 [2024-07-21 01:27:51.776003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.487 [2024-07-21 01:27:51.785975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.487 [2024-07-21 01:27:51.786003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.487 [2024-07-21 01:27:51.786017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.487 [2024-07-21 01:27:51.786031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.487 [2024-07-21 01:27:51.786111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.487 [2024-07-21 01:27:51.786124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.487 [2024-07-21 01:27:51.786137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.487 [2024-07-21 01:27:51.786147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.487 [2024-07-21 01:27:51.786202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.487 [2024-07-21 01:27:51.786217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.487 [2024-07-21 01:27:51.786229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.487 [2024-07-21 01:27:51.786239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.487 [2024-07-21 01:27:51.786261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.487 [2024-07-21 01:27:51.786271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.487 [2024-07-21 01:27:51.786283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.487 [2024-07-21 01:27:51.786294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.771 [2024-07-21 01:27:51.805009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.771 [2024-07-21 01:27:51.805043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.771 [2024-07-21 01:27:51.805073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.771 [2024-07-21 01:27:51.805084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.771 [2024-07-21 
01:27:51.817462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.771 [2024-07-21 01:27:51.817497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.771 [2024-07-21 01:27:51.817513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.771 [2024-07-21 01:27:51.817523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.771 [2024-07-21 01:27:51.817589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.771 [2024-07-21 01:27:51.817602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.771 [2024-07-21 01:27:51.817620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.771 [2024-07-21 01:27:51.817630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.771 [2024-07-21 01:27:51.817668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.771 [2024-07-21 01:27:51.817679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.771 [2024-07-21 01:27:51.817692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.771 [2024-07-21 01:27:51.817702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.771 [2024-07-21 01:27:51.817805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.771 [2024-07-21 01:27:51.817818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.771 [2024-07-21 01:27:51.817831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.772 [2024-07-21 01:27:51.817855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.772 [2024-07-21 01:27:51.817904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.772 [2024-07-21 01:27:51.817916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:06.772 [2024-07-21 01:27:51.817930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.772 [2024-07-21 01:27:51.817939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.772 [2024-07-21 01:27:51.817999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.772 [2024-07-21 01:27:51.818010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.772 [2024-07-21 01:27:51.818023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.772 [2024-07-21 01:27:51.818035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.772 [2024-07-21 01:27:51.818091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.772 [2024-07-21 01:27:51.818103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.772 [2024-07-21 01:27:51.818117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.772 [2024-07-21 01:27:51.818127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.772 [2024-07-21 01:27:51.818294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.043 ms, result 0 00:18:07.029 01:27:52 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:07.029 01:27:52 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:07.030 [2024-07-21 01:27:52.260219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:07.030 [2024-07-21 01:27:52.260347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89703 ] 00:18:07.288 [2024-07-21 01:27:52.430039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.288 [2024-07-21 01:27:52.492992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.547 [2024-07-21 01:27:52.638234] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:07.547 [2024-07-21 01:27:52.638308] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:07.547 [2024-07-21 01:27:52.793081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.793128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:07.547 [2024-07-21 01:27:52.793146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:07.547 [2024-07-21 01:27:52.793156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.795727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.795762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.547 [2024-07-21 01:27:52.795774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:18:07.547 [2024-07-21 01:27:52.795784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.795873] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:07.547 [2024-07-21 01:27:52.796154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:07.547 [2024-07-21 01:27:52.796178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.796189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.547 [2024-07-21 01:27:52.796203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:07.547 [2024-07-21 01:27:52.796219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.798584] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:07.547 [2024-07-21 01:27:52.802018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.802057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:07.547 [2024-07-21 01:27:52.802070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.440 ms 00:18:07.547 [2024-07-21 01:27:52.802089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.802169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.802182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:07.547 [2024-07-21 01:27:52.802197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 
ms 00:18:07.547 [2024-07-21 01:27:52.802207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.814069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.814099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.547 [2024-07-21 01:27:52.814112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.832 ms 00:18:07.547 [2024-07-21 01:27:52.814122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.814251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.814266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.547 [2024-07-21 01:27:52.814277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:07.547 [2024-07-21 01:27:52.814291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.814326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.814337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:07.547 [2024-07-21 01:27:52.814347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:07.547 [2024-07-21 01:27:52.814357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.814379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:07.547 [2024-07-21 01:27:52.816977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.817000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.547 [2024-07-21 01:27:52.817017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:18:07.547 [2024-07-21 01:27:52.817028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.817070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.817081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:07.547 [2024-07-21 01:27:52.817092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:07.547 [2024-07-21 01:27:52.817103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.817123] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:07.547 [2024-07-21 01:27:52.817172] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:07.547 [2024-07-21 01:27:52.817216] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:07.547 [2024-07-21 01:27:52.817245] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:07.547 [2024-07-21 01:27:52.817334] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:07.547 [2024-07-21 01:27:52.817349] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:07.547 [2024-07-21 01:27:52.817363] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x168 bytes 00:18:07.547 [2024-07-21 01:27:52.817377] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:07.547 [2024-07-21 01:27:52.817390] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:07.547 [2024-07-21 01:27:52.817402] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:07.547 [2024-07-21 01:27:52.817413] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:07.547 [2024-07-21 01:27:52.817423] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:07.547 [2024-07-21 01:27:52.817437] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:07.547 [2024-07-21 01:27:52.817448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.817459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:07.547 [2024-07-21 01:27:52.817470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:18:07.547 [2024-07-21 01:27:52.817480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.547 [2024-07-21 01:27:52.817557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.547 [2024-07-21 01:27:52.817569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:07.547 [2024-07-21 01:27:52.817580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:07.548 [2024-07-21 01:27:52.817597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.548 [2024-07-21 01:27:52.817694] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:07.548 [2024-07-21 01:27:52.817708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:07.548 [2024-07-21 01:27:52.817719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.548 [2024-07-21 01:27:52.817730] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.548 [2024-07-21 01:27:52.817741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:07.548 [2024-07-21 01:27:52.817751] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:07.548 [2024-07-21 01:27:52.817762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:07.548 [2024-07-21 01:27:52.817783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:07.548 [2024-07-21 01:27:52.817803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:07.548 [2024-07-21 01:27:52.817812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.548 [2024-07-21 01:27:52.817832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:07.548 [2024-07-21 01:27:52.817842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:07.548 [2024-07-21 01:27:52.817866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.548 [2024-07-21 01:27:52.817892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:07.548 [2024-07-21 01:27:52.817902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:07.548 [2024-07-21 01:27:52.817912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.548 [2024-07-21 01:27:52.817922] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:07.548 [2024-07-21 01:27:52.817932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:07.548 [2024-07-21 01:27:52.817941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.548 [2024-07-21 01:27:52.817951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:07.548 [2024-07-21 01:27:52.817961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:07.548 [2024-07-21 01:27:52.817981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.548 [2024-07-21 01:27:52.817990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:07.548 [2024-07-21 01:27:52.817998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:07.548 [2024-07-21 01:27:52.818007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.548 [2024-07-21 01:27:52.818016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:07.548 [2024-07-21 01:27:52.818025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:07.548 [2024-07-21 01:27:52.818034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.548 [2024-07-21 01:27:52.818049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:07.548 [2024-07-21 01:27:52.818058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:07.548 [2024-07-21 01:27:52.818066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.548 [2024-07-21 01:27:52.818074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:07.548 [2024-07-21 01:27:52.818083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:07.548 [2024-07-21 01:27:52.818091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.548 [2024-07-21 01:27:52.818100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:07.548 [2024-07-21 01:27:52.818108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:07.548 [2024-07-21 01:27:52.818117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.548 [2024-07-21 01:27:52.818126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:07.548 [2024-07-21 01:27:52.818135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:07.548 [2024-07-21 01:27:52.818143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.548 [2024-07-21 01:27:52.818151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:07.548 [2024-07-21 01:27:52.818160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:07.548 [2024-07-21 01:27:52.818169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.548 [2024-07-21 01:27:52.818177] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:07.548 [2024-07-21 01:27:52.818190] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:07.548 [2024-07-21 01:27:52.818201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.548 [2024-07-21 01:27:52.818211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.548 [2024-07-21 01:27:52.818221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:07.548 [2024-07-21 01:27:52.818230] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:07.548 [2024-07-21 01:27:52.818239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:07.548 [2024-07-21 01:27:52.818248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:07.548 [2024-07-21 01:27:52.818257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:07.548 [2024-07-21 01:27:52.818265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:07.548 [2024-07-21 01:27:52.818276] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:07.548 [2024-07-21 01:27:52.818288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.548 [2024-07-21 01:27:52.818302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:07.548 [2024-07-21 01:27:52.818312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:07.548 [2024-07-21 01:27:52.818323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:07.548 [2024-07-21 01:27:52.818333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:07.548 [2024-07-21 01:27:52.818343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:07.548 [2024-07-21 01:27:52.818356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:07.548 [2024-07-21 01:27:52.818366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:07.548 [2024-07-21 01:27:52.818376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:07.548 [2024-07-21 01:27:52.818385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:07.548 [2024-07-21 01:27:52.818395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:07.548 [2024-07-21 01:27:52.818405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:07.548 [2024-07-21 01:27:52.818416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:07.548 [2024-07-21 01:27:52.818425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:07.548 [2024-07-21 01:27:52.818435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:07.548 [2024-07-21 01:27:52.818445] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:07.548 [2024-07-21 01:27:52.818455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.548 [2024-07-21 01:27:52.818466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:07.548 [2024-07-21 01:27:52.818475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:07.548 [2024-07-21 01:27:52.818485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:07.548 [2024-07-21 01:27:52.818494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:07.548 [2024-07-21 01:27:52.818504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.548 [2024-07-21 01:27:52.818517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:07.548 [2024-07-21 01:27:52.818530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:18:07.548 [2024-07-21 01:27:52.818540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.548 [2024-07-21 01:27:52.848347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.548 [2024-07-21 01:27:52.848383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.548 [2024-07-21 01:27:52.848413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.788 ms 00:18:07.548 [2024-07-21 01:27:52.848431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.548 [2024-07-21 01:27:52.848582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.548 [2024-07-21 01:27:52.848610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:07.548 [2024-07-21 01:27:52.848625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:07.548 [2024-07-21 01:27:52.848648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-07-21 01:27:52.864749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-07-21 01:27:52.864780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.807 [2024-07-21 01:27:52.864793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.086 ms 00:18:07.807 [2024-07-21 01:27:52.864816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-07-21 01:27:52.864893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-07-21 01:27:52.864906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.807 [2024-07-21 01:27:52.864918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:07.807 [2024-07-21 01:27:52.864937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-07-21 01:27:52.865651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-07-21 01:27:52.865671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.807 [2024-07-21 01:27:52.865682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:18:07.807 [2024-07-21 01:27:52.865702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-07-21 01:27:52.865866] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-07-21 01:27:52.865896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.807 [2024-07-21 01:27:52.865914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:18:07.807 [2024-07-21 01:27:52.865925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-07-21 01:27:52.876028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-07-21 01:27:52.876055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:07.807 [2024-07-21 01:27:52.876067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.088 ms 00:18:07.807 [2024-07-21 01:27:52.876077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.807 [2024-07-21 01:27:52.879753] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:07.807 [2024-07-21 01:27:52.879783] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:07.807 [2024-07-21 01:27:52.879811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.807 [2024-07-21 01:27:52.879822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:07.807 [2024-07-21 01:27:52.879867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:18:07.807 [2024-07-21 01:27:52.879876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.892958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.892991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:07.808 [2024-07-21 01:27:52.893011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.053 ms 00:18:07.808 [2024-07-21 01:27:52.893022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.894954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.894980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:07.808 [2024-07-21 01:27:52.894992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:18:07.808 [2024-07-21 01:27:52.895002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.896605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.896626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:07.808 [2024-07-21 01:27:52.896647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:18:07.808 [2024-07-21 01:27:52.896657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.896956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.896974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:07.808 [2024-07-21 01:27:52.896986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:18:07.808 [2024-07-21 01:27:52.896996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.926074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 
01:27:52.926120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:07.808 [2024-07-21 01:27:52.926136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.098 ms 00:18:07.808 [2024-07-21 01:27:52.926148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.932123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:07.808 [2024-07-21 01:27:52.955129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.955163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:07.808 [2024-07-21 01:27:52.955178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.932 ms 00:18:07.808 [2024-07-21 01:27:52.955188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.955273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.955296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:07.808 [2024-07-21 01:27:52.955312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:07.808 [2024-07-21 01:27:52.955323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.955386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.955397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:07.808 [2024-07-21 01:27:52.955407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:07.808 [2024-07-21 01:27:52.955417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.955441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.955452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:07.808 [2024-07-21 01:27:52.955462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:07.808 [2024-07-21 01:27:52.955476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.955514] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:07.808 [2024-07-21 01:27:52.955526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.955537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:07.808 [2024-07-21 01:27:52.955554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:07.808 [2024-07-21 01:27:52.955564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.960365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.960396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:07.808 [2024-07-21 01:27:52.960409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.787 ms 00:18:07.808 [2024-07-21 01:27:52.960427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.808 [2024-07-21 01:27:52.960507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.808 [2024-07-21 01:27:52.960528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:07.808 [2024-07-21 
01:27:52.960540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:07.808 [2024-07-21 01:27:52.960561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.809 [2024-07-21 01:27:52.961872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.809 [2024-07-21 01:27:52.962788] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 168.706 ms, result 0 00:18:07.809 [2024-07-21 01:27:52.963672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:07.809 [2024-07-21 01:27:52.971743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:18.629  Copying: 26/256 [MB] (26 MBps) Copying: 49/256 [MB] (23 MBps) Copying: 72/256 [MB] (23 MBps) Copying: 96/256 [MB] (23 MBps) Copying: 119/256 [MB] (23 MBps) Copying: 143/256 [MB] (23 MBps) Copying: 167/256 [MB] (23 MBps) Copying: 191/256 [MB] (24 MBps) Copying: 215/256 [MB] (23 MBps) Copying: 239/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-21 01:28:03.680067] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:18.629 [2024-07-21 01:28:03.682139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.682170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:18.629 [2024-07-21 01:28:03.682186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:18.629 [2024-07-21 01:28:03.682196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.682218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:18.629 [2024-07-21 01:28:03.683281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.683300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:18.629 [2024-07-21 01:28:03.683311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:18:18.629 [2024-07-21 01:28:03.683320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.683533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.683549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:18.629 [2024-07-21 01:28:03.683563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:18.629 [2024-07-21 01:28:03.683573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.686322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.686343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:18.629 [2024-07-21 01:28:03.686353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms 00:18:18.629 [2024-07-21 01:28:03.686362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.691595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.691624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:18.629 [2024-07-21 01:28:03.691651] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.224 ms 00:18:18.629 [2024-07-21 01:28:03.691660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.693153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.693187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:18.629 [2024-07-21 01:28:03.693199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:18:18.629 [2024-07-21 01:28:03.693209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.697948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.697991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:18.629 [2024-07-21 01:28:03.698003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:18:18.629 [2024-07-21 01:28:03.698012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.698124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.698136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:18.629 [2024-07-21 01:28:03.698161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:18.629 [2024-07-21 01:28:03.698171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.700726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.700757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:18.629 [2024-07-21 01:28:03.700768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.534 ms 00:18:18.629 [2024-07-21 01:28:03.700777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.702559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.702589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:18.629 [2024-07-21 01:28:03.702599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:18:18.629 [2024-07-21 01:28:03.702608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.704050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.704079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:18.629 [2024-07-21 01:28:03.704089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:18:18.629 [2024-07-21 01:28:03.704098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.705433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.629 [2024-07-21 01:28:03.705461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:18.629 [2024-07-21 01:28:03.705471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:18:18.629 [2024-07-21 01:28:03.705481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.629 [2024-07-21 01:28:03.705509] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:18.629 [2024-07-21 01:28:03.705528] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705795] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:18.629 [2024-07-21 01:28:03.705884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.705990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 
01:28:03.706083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:18:18.630 [2024-07-21 01:28:03.706331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:18:18.630 [2024-07-21 01:28:03.706587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:18.630 [2024-07-21 01:28:03.706597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb64116f-0918-47a1-8bee-b098e1f13b9f 00:18:18.630 [2024-07-21 01:28:03.706607] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:18.630 [2024-07-21 01:28:03.706617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:18.630 [2024-07-21 01:28:03.706626] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:18.630 [2024-07-21 01:28:03.706636] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:18.630 [2024-07-21 01:28:03.706655] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:18.630 [2024-07-21 01:28:03.706669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:18.630 [2024-07-21 01:28:03.706678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:18.630 [2024-07-21 01:28:03.706696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:18.630 [2024-07-21 01:28:03.706705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:18.630 [2024-07-21 01:28:03.706714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.630 [2024-07-21 01:28:03.706723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:18.630 [2024-07-21 01:28:03.706743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:18:18.630 [2024-07-21 01:28:03.706757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.709312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.630 [2024-07-21 01:28:03.709334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:18.630 [2024-07-21 01:28:03.709344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:18:18.630 [2024-07-21 01:28:03.709357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.709522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.630 [2024-07-21 01:28:03.709540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:18.630 [2024-07-21 01:28:03.709551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:18.630 [2024-07-21 01:28:03.709561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.718502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.718524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.630 [2024-07-21 01:28:03.718539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.718550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.718618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.718629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.630 [2024-07-21 01:28:03.718639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.718649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:18.630 [2024-07-21 01:28:03.718690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.718702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.630 [2024-07-21 01:28:03.718713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.718726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.718743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.718753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.630 [2024-07-21 01:28:03.718763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.718772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.735500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.735532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.630 [2024-07-21 01:28:03.735545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.735563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.747956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.747987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.630 [2024-07-21 01:28:03.748000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.748011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.748056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.748068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.630 [2024-07-21 01:28:03.748079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.748089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.748128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.748139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.630 [2024-07-21 01:28:03.748149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.748159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.748243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.748256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.630 [2024-07-21 01:28:03.748266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.748276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.748312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.748328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:18.630 [2024-07-21 01:28:03.748338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 
01:28:03.748347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.748394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.748405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.630 [2024-07-21 01:28:03.748415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.748425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.748478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.630 [2024-07-21 01:28:03.748489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.630 [2024-07-21 01:28:03.748499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.630 [2024-07-21 01:28:03.748509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.630 [2024-07-21 01:28:03.748689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.600 ms, result 0 00:18:18.889 00:18:18.889 00:18:18.889 01:28:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:18.889 01:28:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:19.455 01:28:04 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:19.455 [2024-07-21 01:28:04.648608] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:19.455 [2024-07-21 01:28:04.648738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89835 ] 00:18:19.714 [2024-07-21 01:28:04.824403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.714 [2024-07-21 01:28:04.888811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.976 [2024-07-21 01:28:05.033722] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.976 [2024-07-21 01:28:05.033819] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.976 [2024-07-21 01:28:05.188241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.188286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:19.976 [2024-07-21 01:28:05.188309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:19.976 [2024-07-21 01:28:05.188320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.190894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.190929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.976 [2024-07-21 01:28:05.190940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:18:19.976 [2024-07-21 01:28:05.190950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.191022] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:19.976 [2024-07-21 01:28:05.191325] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:19.976 [2024-07-21 01:28:05.191355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.191365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.976 [2024-07-21 01:28:05.191380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:18:19.976 [2024-07-21 01:28:05.191392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.193768] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:19.976 [2024-07-21 01:28:05.197283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.197316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:19.976 [2024-07-21 01:28:05.197329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.522 ms 00:18:19.976 [2024-07-21 01:28:05.197348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.197424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.197444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:19.976 [2024-07-21 01:28:05.197458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:19.976 [2024-07-21 01:28:05.197469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.209154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 
01:28:05.209184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.976 [2024-07-21 01:28:05.209197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.652 ms 00:18:19.976 [2024-07-21 01:28:05.209206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.209374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.209393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.976 [2024-07-21 01:28:05.209411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:19.976 [2024-07-21 01:28:05.209426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.209460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.209471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:19.976 [2024-07-21 01:28:05.209480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:19.976 [2024-07-21 01:28:05.209489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.209512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:19.976 [2024-07-21 01:28:05.211995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.212026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.976 [2024-07-21 01:28:05.212042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.494 ms 00:18:19.976 [2024-07-21 01:28:05.212052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.212088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.976 [2024-07-21 01:28:05.212099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:19.976 [2024-07-21 01:28:05.212110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:19.976 [2024-07-21 01:28:05.212120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.976 [2024-07-21 01:28:05.212139] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:19.977 [2024-07-21 01:28:05.212166] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:19.977 [2024-07-21 01:28:05.212202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:19.977 [2024-07-21 01:28:05.212223] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:19.977 [2024-07-21 01:28:05.212305] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:19.977 [2024-07-21 01:28:05.212325] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:19.977 [2024-07-21 01:28:05.212338] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:19.977 [2024-07-21 01:28:05.212350] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212362] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212373] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:19.977 [2024-07-21 01:28:05.212383] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:19.977 [2024-07-21 01:28:05.212393] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:19.977 [2024-07-21 01:28:05.212413] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:19.977 [2024-07-21 01:28:05.212425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.977 [2024-07-21 01:28:05.212435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:19.977 [2024-07-21 01:28:05.212445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:18:19.977 [2024-07-21 01:28:05.212454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.977 [2024-07-21 01:28:05.212524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.977 [2024-07-21 01:28:05.212535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:19.977 [2024-07-21 01:28:05.212545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:19.977 [2024-07-21 01:28:05.212554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.977 [2024-07-21 01:28:05.212643] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:19.977 [2024-07-21 01:28:05.212657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:19.977 [2024-07-21 01:28:05.212667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:19.977 [2024-07-21 01:28:05.212713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:19.977 [2024-07-21 01:28:05.212744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.977 [2024-07-21 01:28:05.212775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:19.977 [2024-07-21 01:28:05.212785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:19.977 [2024-07-21 01:28:05.212798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.977 [2024-07-21 01:28:05.212807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:19.977 [2024-07-21 01:28:05.212817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:19.977 [2024-07-21 01:28:05.212826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:19.977 [2024-07-21 01:28:05.212857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212867] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:19.977 [2024-07-21 01:28:05.212886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:19.977 [2024-07-21 01:28:05.212914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:19.977 [2024-07-21 01:28:05.212951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.977 [2024-07-21 01:28:05.212980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:19.977 [2024-07-21 01:28:05.212990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:19.977 [2024-07-21 01:28:05.212999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.977 [2024-07-21 01:28:05.213008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:19.977 [2024-07-21 01:28:05.213017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:19.977 [2024-07-21 01:28:05.213027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.977 [2024-07-21 01:28:05.213036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:19.977 [2024-07-21 01:28:05.213045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:19.977 [2024-07-21 01:28:05.213054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.977 [2024-07-21 01:28:05.213064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:19.977 [2024-07-21 01:28:05.213073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:19.977 [2024-07-21 01:28:05.213082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.977 [2024-07-21 01:28:05.213091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:19.977 [2024-07-21 01:28:05.213101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:19.977 [2024-07-21 01:28:05.213111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.977 [2024-07-21 01:28:05.213120] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:19.977 [2024-07-21 01:28:05.213133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:19.977 [2024-07-21 01:28:05.213142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.977 [2024-07-21 01:28:05.213152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.977 [2024-07-21 01:28:05.213162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:19.977 [2024-07-21 01:28:05.213171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:19.977 [2024-07-21 01:28:05.213180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:19.977 [2024-07-21 01:28:05.213190] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:19.977 [2024-07-21 01:28:05.213199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:19.977 [2024-07-21 01:28:05.213208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:19.977 [2024-07-21 01:28:05.213218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:19.977 [2024-07-21 01:28:05.213230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.977 [2024-07-21 01:28:05.213246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:19.977 [2024-07-21 01:28:05.213256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:19.977 [2024-07-21 01:28:05.213266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:19.977 [2024-07-21 01:28:05.213276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:19.977 [2024-07-21 01:28:05.213286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:19.977 [2024-07-21 01:28:05.213300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:19.977 [2024-07-21 01:28:05.213311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:19.977 [2024-07-21 01:28:05.213322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:19.977 [2024-07-21 01:28:05.213332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:19.977 [2024-07-21 01:28:05.213342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:19.977 [2024-07-21 01:28:05.213352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:19.977 [2024-07-21 01:28:05.213362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:19.977 [2024-07-21 01:28:05.213372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:19.977 [2024-07-21 01:28:05.213382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:19.977 [2024-07-21 01:28:05.213392] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:19.977 [2024-07-21 01:28:05.213403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.977 [2024-07-21 01:28:05.213414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:18:19.977 [2024-07-21 01:28:05.213424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:19.977 [2024-07-21 01:28:05.213435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:19.977 [2024-07-21 01:28:05.213446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:19.977 [2024-07-21 01:28:05.213456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.977 [2024-07-21 01:28:05.213470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:19.977 [2024-07-21 01:28:05.213480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:18:19.977 [2024-07-21 01:28:05.213497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.977 [2024-07-21 01:28:05.243116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.977 [2024-07-21 01:28:05.243156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.977 [2024-07-21 01:28:05.243174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.599 ms 00:18:19.977 [2024-07-21 01:28:05.243191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.977 [2024-07-21 01:28:05.243328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.977 [2024-07-21 01:28:05.243344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:19.977 [2024-07-21 01:28:05.243358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:19.978 [2024-07-21 01:28:05.243370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.978 [2024-07-21 01:28:05.258919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.978 [2024-07-21 01:28:05.258950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.978 [2024-07-21 01:28:05.258962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.545 ms 00:18:19.978 [2024-07-21 01:28:05.258976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.978 [2024-07-21 01:28:05.259047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.978 [2024-07-21 01:28:05.259058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.978 [2024-07-21 01:28:05.259070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:19.978 [2024-07-21 01:28:05.259087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.978 [2024-07-21 01:28:05.259864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.978 [2024-07-21 01:28:05.259883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.978 [2024-07-21 01:28:05.259895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:18:19.978 [2024-07-21 01:28:05.259908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.978 [2024-07-21 01:28:05.260037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.978 [2024-07-21 01:28:05.260050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.978 [2024-07-21 01:28:05.260062] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:19.978 [2024-07-21 01:28:05.260072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.978 [2024-07-21 01:28:05.270155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.978 [2024-07-21 01:28:05.270183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.978 [2024-07-21 01:28:05.270197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.075 ms 00:18:19.978 [2024-07-21 01:28:05.270208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.978 [2024-07-21 01:28:05.274483] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:19.978 [2024-07-21 01:28:05.274516] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:19.978 [2024-07-21 01:28:05.274543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.978 [2024-07-21 01:28:05.274554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:19.978 [2024-07-21 01:28:05.274564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:18:19.978 [2024-07-21 01:28:05.274574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.237 [2024-07-21 01:28:05.287689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.237 [2024-07-21 01:28:05.287722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:20.237 [2024-07-21 01:28:05.287741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.075 ms 00:18:20.237 [2024-07-21 01:28:05.287751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.237 [2024-07-21 01:28:05.289626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.237 [2024-07-21 01:28:05.289659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:20.237 [2024-07-21 01:28:05.289672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:18:20.237 [2024-07-21 01:28:05.289682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.237 [2024-07-21 01:28:05.291179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.237 [2024-07-21 01:28:05.291208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:20.237 [2024-07-21 01:28:05.291219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:18:20.237 [2024-07-21 01:28:05.291229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.237 [2024-07-21 01:28:05.291533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.291551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:20.238 [2024-07-21 01:28:05.291563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:18:20.238 [2024-07-21 01:28:05.291574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.321530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.321580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:20.238 [2024-07-21 01:28:05.321597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.977 ms 
00:18:20.238 [2024-07-21 01:28:05.321608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.327567] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:20.238 [2024-07-21 01:28:05.350821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.350869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:20.238 [2024-07-21 01:28:05.350895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.171 ms 00:18:20.238 [2024-07-21 01:28:05.350906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.351000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.351013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:20.238 [2024-07-21 01:28:05.351030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:20.238 [2024-07-21 01:28:05.351041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.351103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.351114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:20.238 [2024-07-21 01:28:05.351125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:20.238 [2024-07-21 01:28:05.351134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.351159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.351170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:20.238 [2024-07-21 01:28:05.351180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:20.238 [2024-07-21 01:28:05.351192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.351231] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:20.238 [2024-07-21 01:28:05.351243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.351260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:20.238 [2024-07-21 01:28:05.351278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:20.238 [2024-07-21 01:28:05.351288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.356156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.356190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:20.238 [2024-07-21 01:28:05.356204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:18:20.238 [2024-07-21 01:28:05.356221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 01:28:05.356302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.238 [2024-07-21 01:28:05.356315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:20.238 [2024-07-21 01:28:05.356327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:20.238 [2024-07-21 01:28:05.356338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.238 [2024-07-21 
01:28:05.357558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:20.238 [2024-07-21 01:28:05.358521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 169.259 ms, result 0 00:18:20.238 [2024-07-21 01:28:05.359487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.238 [2024-07-21 01:28:05.367672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:20.498  Copying: 4096/4096 [kB] (average 21 MBps)[2024-07-21 01:28:05.554393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.498 [2024-07-21 01:28:05.555373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.555526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:20.498 [2024-07-21 01:28:05.555608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:20.498 [2024-07-21 01:28:05.555645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.555693] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:20.498 [2024-07-21 01:28:05.556924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.557034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:20.498 [2024-07-21 01:28:05.557063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:18:20.498 [2024-07-21 01:28:05.557074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.559065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.559217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:20.498 [2024-07-21 01:28:05.559300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.964 ms 00:18:20.498 [2024-07-21 01:28:05.559335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.562675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.562798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:20.498 [2024-07-21 01:28:05.562881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:18:20.498 [2024-07-21 01:28:05.562936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.568494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.568610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:20.498 [2024-07-21 01:28:05.568778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.390 ms 00:18:20.498 [2024-07-21 01:28:05.568820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.570336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.570456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:20.498 [2024-07-21 01:28:05.570530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:18:20.498 
[2024-07-21 01:28:05.570563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.575349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.575470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:20.498 [2024-07-21 01:28:05.575541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:18:20.498 [2024-07-21 01:28:05.575600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.575866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.576004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:20.498 [2024-07-21 01:28:05.576089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:20.498 [2024-07-21 01:28:05.576123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.578369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.578505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:20.498 [2024-07-21 01:28:05.578573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.208 ms 00:18:20.498 [2024-07-21 01:28:05.578605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.580317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.580449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:20.498 [2024-07-21 01:28:05.580514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:18:20.498 [2024-07-21 01:28:05.580527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.581808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.581851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:20.498 [2024-07-21 01:28:05.581863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:18:20.498 [2024-07-21 01:28:05.581872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.583129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.498 [2024-07-21 01:28:05.583161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:20.498 [2024-07-21 01:28:05.583171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:18:20.498 [2024-07-21 01:28:05.583181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.498 [2024-07-21 01:28:05.583208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:20.498 [2024-07-21 01:28:05.583225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583269] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 
01:28:05.583531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:20.498 [2024-07-21 01:28:05.583680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:18:20.499 [2024-07-21 01:28:05.583791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.583991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:20.499 [2024-07-21 01:28:05.584298] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:20.499 [2024-07-21 01:28:05.584308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb64116f-0918-47a1-8bee-b098e1f13b9f 00:18:20.499 [2024-07-21 01:28:05.584331] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:20.499 [2024-07-21 01:28:05.584341] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:20.499 [2024-07-21 
01:28:05.584350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:20.499 [2024-07-21 01:28:05.584361] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:20.499 [2024-07-21 01:28:05.584378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:20.499 [2024-07-21 01:28:05.584392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:20.499 [2024-07-21 01:28:05.584402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:20.499 [2024-07-21 01:28:05.584421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:20.499 [2024-07-21 01:28:05.584429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:20.499 [2024-07-21 01:28:05.584438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.499 [2024-07-21 01:28:05.584449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:20.499 [2024-07-21 01:28:05.584460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:18:20.499 [2024-07-21 01:28:05.584474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.586678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.499 [2024-07-21 01:28:05.586699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:20.499 [2024-07-21 01:28:05.586710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:18:20.499 [2024-07-21 01:28:05.586725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.586889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.499 [2024-07-21 01:28:05.586901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:20.499 [2024-07-21 01:28:05.586912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:18:20.499 [2024-07-21 01:28:05.586921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.596177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.499 [2024-07-21 01:28:05.596293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.499 [2024-07-21 01:28:05.596411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.499 [2024-07-21 01:28:05.596448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.596536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.499 [2024-07-21 01:28:05.596569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.499 [2024-07-21 01:28:05.596609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.499 [2024-07-21 01:28:05.596646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.596791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.499 [2024-07-21 01:28:05.596843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.499 [2024-07-21 01:28:05.596877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.499 [2024-07-21 01:28:05.596913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.596963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:20.499 [2024-07-21 01:28:05.597046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.499 [2024-07-21 01:28:05.597082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.499 [2024-07-21 01:28:05.597111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.614292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.499 [2024-07-21 01:28:05.614440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.499 [2024-07-21 01:28:05.614555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.499 [2024-07-21 01:28:05.614600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.627179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.499 [2024-07-21 01:28:05.627307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.499 [2024-07-21 01:28:05.627380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.499 [2024-07-21 01:28:05.627415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.499 [2024-07-21 01:28:05.627480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.499 [2024-07-21 01:28:05.627529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.499 [2024-07-21 01:28:05.627559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.500 [2024-07-21 01:28:05.627588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.500 [2024-07-21 01:28:05.627646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.500 [2024-07-21 01:28:05.627734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.500 [2024-07-21 01:28:05.627769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.500 [2024-07-21 01:28:05.627799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.500 [2024-07-21 01:28:05.627928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.500 [2024-07-21 01:28:05.627967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.500 [2024-07-21 01:28:05.628047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.500 [2024-07-21 01:28:05.628081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.500 [2024-07-21 01:28:05.628147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.500 [2024-07-21 01:28:05.628188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:20.500 [2024-07-21 01:28:05.628219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.500 [2024-07-21 01:28:05.628295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.500 [2024-07-21 01:28:05.628369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.500 [2024-07-21 01:28:05.628400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.500 [2024-07-21 01:28:05.628430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.500 [2024-07-21 01:28:05.628459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.500 
[2024-07-21 01:28:05.628651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.500 [2024-07-21 01:28:05.628713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.500 [2024-07-21 01:28:05.628744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.500 [2024-07-21 01:28:05.628774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.500 [2024-07-21 01:28:05.629084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.778 ms, result 0 00:18:20.759 00:18:20.759 00:18:20.759 01:28:05 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89854 00:18:20.759 01:28:05 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:20.759 01:28:05 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89854 00:18:20.759 01:28:05 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89854 ']' 00:18:20.759 01:28:05 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.759 01:28:05 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:20.759 01:28:05 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.759 01:28:05 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:20.759 01:28:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:21.019 [2024-07-21 01:28:06.089952] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:21.019 [2024-07-21 01:28:06.090079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89854 ] 00:18:21.019 [2024-07-21 01:28:06.258284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.019 [2024-07-21 01:28:06.322531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.585 01:28:06 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:21.585 01:28:06 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:18:21.585 01:28:06 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:21.844 [2024-07-21 01:28:07.058718] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:21.844 [2024-07-21 01:28:07.058783] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.104 [2024-07-21 01:28:07.229820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.229881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.104 [2024-07-21 01:28:07.229899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.104 [2024-07-21 01:28:07.229910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.232386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.232426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.104 [2024-07-21 01:28:07.232444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.458 ms 00:18:22.104 [2024-07-21 01:28:07.232453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.232532] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.104 [2024-07-21 01:28:07.232774] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.104 [2024-07-21 01:28:07.232795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.232813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.104 [2024-07-21 01:28:07.232845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:22.104 [2024-07-21 01:28:07.232856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.235342] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.104 [2024-07-21 01:28:07.238817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.238866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.104 [2024-07-21 01:28:07.238879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:18:22.104 [2024-07-21 01:28:07.238893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.238961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.238977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.104 [2024-07-21 01:28:07.238988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:22.104 [2024-07-21 01:28:07.239003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.251063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.251093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.104 [2024-07-21 01:28:07.251105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.021 ms 00:18:22.104 [2024-07-21 01:28:07.251120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.251238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.251255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.104 [2024-07-21 01:28:07.251266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:22.104 [2024-07-21 01:28:07.251283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.251312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.251325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.104 [2024-07-21 01:28:07.251335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:22.104 [2024-07-21 01:28:07.251347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.251371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:22.104 [2024-07-21 01:28:07.254050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.254075] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.104 [2024-07-21 01:28:07.254092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.686 ms 00:18:22.104 [2024-07-21 01:28:07.254113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.254152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.254163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.104 [2024-07-21 01:28:07.254176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:22.104 [2024-07-21 01:28:07.254185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.254210] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.104 [2024-07-21 01:28:07.254235] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:22.104 [2024-07-21 01:28:07.254279] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.104 [2024-07-21 01:28:07.254301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:22.104 [2024-07-21 01:28:07.254387] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.104 [2024-07-21 01:28:07.254405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.104 [2024-07-21 01:28:07.254420] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:22.104 [2024-07-21 01:28:07.254432] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.104 [2024-07-21 01:28:07.254448] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.104 [2024-07-21 01:28:07.254459] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:22.104 [2024-07-21 01:28:07.254475] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.104 [2024-07-21 01:28:07.254484] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.104 [2024-07-21 01:28:07.254498] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.104 [2024-07-21 01:28:07.254510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.254523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.104 [2024-07-21 01:28:07.254533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:18:22.104 [2024-07-21 01:28:07.254545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.254611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.104 [2024-07-21 01:28:07.254624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.104 [2024-07-21 01:28:07.254633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:22.104 [2024-07-21 01:28:07.254653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.104 [2024-07-21 01:28:07.254733] ftl_layout.c: 758:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.104 [2024-07-21 01:28:07.254751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.104 [2024-07-21 01:28:07.254761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.104 [2024-07-21 01:28:07.254775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.104 [2024-07-21 01:28:07.254785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.104 [2024-07-21 01:28:07.254800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.104 [2024-07-21 01:28:07.254809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:22.104 [2024-07-21 01:28:07.254821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.104 [2024-07-21 01:28:07.254848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:22.104 [2024-07-21 01:28:07.254877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.104 [2024-07-21 01:28:07.254886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.104 [2024-07-21 01:28:07.254900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:22.104 [2024-07-21 01:28:07.254910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.104 [2024-07-21 01:28:07.254922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.104 [2024-07-21 01:28:07.254932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:22.104 [2024-07-21 01:28:07.254944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.104 [2024-07-21 01:28:07.254954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.104 [2024-07-21 01:28:07.254965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:22.104 [2024-07-21 01:28:07.254974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.104 [2024-07-21 01:28:07.254996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.104 [2024-07-21 01:28:07.255006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.104 [2024-07-21 01:28:07.255030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.104 [2024-07-21 01:28:07.255042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.104 [2024-07-21 01:28:07.255063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.104 [2024-07-21 01:28:07.255073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.104 [2024-07-21 01:28:07.255097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.104 [2024-07-21 01:28:07.255117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.104 [2024-07-21 01:28:07.255138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.104 [2024-07-21 01:28:07.255147] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.104 [2024-07-21 01:28:07.255168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.104 [2024-07-21 01:28:07.255180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:22.104 [2024-07-21 01:28:07.255189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.104 [2024-07-21 01:28:07.255205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.104 [2024-07-21 01:28:07.255215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:22.104 [2024-07-21 01:28:07.255226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.104 [2024-07-21 01:28:07.255247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:22.104 [2024-07-21 01:28:07.255256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255269] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.104 [2024-07-21 01:28:07.255281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.104 [2024-07-21 01:28:07.255294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.104 [2024-07-21 01:28:07.255303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.104 [2024-07-21 01:28:07.255315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.104 [2024-07-21 01:28:07.255324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.104 [2024-07-21 01:28:07.255336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.104 [2024-07-21 01:28:07.255346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.104 [2024-07-21 01:28:07.255358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.104 [2024-07-21 01:28:07.255367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.104 [2024-07-21 01:28:07.255398] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.104 [2024-07-21 01:28:07.255418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.104 [2024-07-21 01:28:07.255433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:22.104 [2024-07-21 01:28:07.255443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:22.104 [2024-07-21 01:28:07.255457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:22.104 [2024-07-21 01:28:07.255467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:22.104 [2024-07-21 01:28:07.255480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:22.104 [2024-07-21 01:28:07.255490] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:22.104 [2024-07-21 01:28:07.255504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:22.104 [2024-07-21 01:28:07.255517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:22.104 [2024-07-21 01:28:07.255531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:22.104 [2024-07-21 01:28:07.255541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:22.104 [2024-07-21 01:28:07.255553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:22.104 [2024-07-21 01:28:07.255563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:22.104 [2024-07-21 01:28:07.255576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:22.104 [2024-07-21 01:28:07.255587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:22.104 [2024-07-21 01:28:07.255603] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.104 [2024-07-21 01:28:07.255617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.105 [2024-07-21 01:28:07.255630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.105 [2024-07-21 01:28:07.255640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.105 [2024-07-21 01:28:07.255653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.105 [2024-07-21 01:28:07.255663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.105 [2024-07-21 01:28:07.255679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.255698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.105 [2024-07-21 01:28:07.255719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:18:22.105 [2024-07-21 01:28:07.255729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.276819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.276869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.105 [2024-07-21 01:28:07.276893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.046 ms 00:18:22.105 [2024-07-21 01:28:07.276903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.277015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.277031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.105 [2024-07-21 01:28:07.277047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:22.105 [2024-07-21 01:28:07.277064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.294219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.294251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.105 [2024-07-21 01:28:07.294267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.157 ms 00:18:22.105 [2024-07-21 01:28:07.294277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.294348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.294360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.105 [2024-07-21 01:28:07.294382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.105 [2024-07-21 01:28:07.294392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.295151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.295166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.105 [2024-07-21 01:28:07.295180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:18:22.105 [2024-07-21 01:28:07.295190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.295318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.295340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.105 [2024-07-21 01:28:07.295357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:22.105 [2024-07-21 01:28:07.295374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.306870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.306899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.105 [2024-07-21 01:28:07.306915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.489 ms 00:18:22.105 [2024-07-21 01:28:07.306925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.310591] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:22.105 [2024-07-21 01:28:07.310625] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.105 [2024-07-21 01:28:07.310643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.310653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.105 [2024-07-21 01:28:07.310666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:18:22.105 [2024-07-21 01:28:07.310676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.323568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.323611] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.105 [2024-07-21 01:28:07.323635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.861 ms 00:18:22.105 [2024-07-21 01:28:07.323645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.325693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.325727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.105 [2024-07-21 01:28:07.325743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:18:22.105 [2024-07-21 01:28:07.325753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.327526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.327557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.105 [2024-07-21 01:28:07.327571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:18:22.105 [2024-07-21 01:28:07.327580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.327903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.327921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.105 [2024-07-21 01:28:07.327934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:18:22.105 [2024-07-21 01:28:07.327944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.376882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.376958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.105 [2024-07-21 01:28:07.376989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.968 ms 00:18:22.105 [2024-07-21 01:28:07.377006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.384020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.105 [2024-07-21 01:28:07.407615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.407661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.105 [2024-07-21 01:28:07.407677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.527 ms 00:18:22.105 [2024-07-21 01:28:07.407690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.407775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.407792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.105 [2024-07-21 01:28:07.407807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:22.105 [2024-07-21 01:28:07.407820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.407907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.407921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.105 [2024-07-21 01:28:07.407932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:22.105 [2024-07-21 01:28:07.407945] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.407979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.407996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.105 [2024-07-21 01:28:07.408005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.105 [2024-07-21 01:28:07.408021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.105 [2024-07-21 01:28:07.408067] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.105 [2024-07-21 01:28:07.408082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.105 [2024-07-21 01:28:07.408093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.105 [2024-07-21 01:28:07.408106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:22.105 [2024-07-21 01:28:07.408116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.363 [2024-07-21 01:28:07.413045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.363 [2024-07-21 01:28:07.413079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.363 [2024-07-21 01:28:07.413095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.908 ms 00:18:22.363 [2024-07-21 01:28:07.413105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.363 [2024-07-21 01:28:07.413190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.363 [2024-07-21 01:28:07.413203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.363 [2024-07-21 01:28:07.413216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:22.363 [2024-07-21 01:28:07.413227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.363 [2024-07-21 01:28:07.414503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.363 [2024-07-21 01:28:07.415513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 184.635 ms, result 0 00:18:22.363 [2024-07-21 01:28:07.416685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:22.363 Some configs were skipped because the RPC state that can call them passed over. 
00:18:22.363 01:28:07 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:22.363 [2024-07-21 01:28:07.629499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.363 [2024-07-21 01:28:07.629727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:22.363 [2024-07-21 01:28:07.629803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:18:22.363 [2024-07-21 01:28:07.629878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.363 [2024-07-21 01:28:07.629943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.982 ms, result 0 00:18:22.363 true 00:18:22.363 01:28:07 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:22.621 [2024-07-21 01:28:07.813224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.621 [2024-07-21 01:28:07.813365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:22.621 [2024-07-21 01:28:07.813445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:18:22.621 [2024-07-21 01:28:07.813508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.621 [2024-07-21 01:28:07.813583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.626 ms, result 0 00:18:22.621 true 00:18:22.621 01:28:07 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89854 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89854 ']' 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89854 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89854 00:18:22.621 killing process with pid 89854 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89854' 00:18:22.621 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89854 00:18:22.622 01:28:07 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89854 00:18:22.881 [2024-07-21 01:28:08.097347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.097415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:22.881 [2024-07-21 01:28:08.097432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:22.881 [2024-07-21 01:28:08.097444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.097470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:22.881 [2024-07-21 01:28:08.098658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.098683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:22.881 [2024-07-21 01:28:08.098696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.169 ms 00:18:22.881 [2024-07-21 01:28:08.098707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.098973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.098987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:22.881 [2024-07-21 01:28:08.098999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:22.881 [2024-07-21 01:28:08.099009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.102239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.102277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:22.881 [2024-07-21 01:28:08.102302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:18:22.881 [2024-07-21 01:28:08.102313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.107642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.107694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:22.881 [2024-07-21 01:28:08.107711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.282 ms 00:18:22.881 [2024-07-21 01:28:08.107721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.109354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.109388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:22.881 [2024-07-21 01:28:08.109401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:18:22.881 [2024-07-21 01:28:08.109411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.114463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.114495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:22.881 [2024-07-21 01:28:08.114509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.020 ms 00:18:22.881 [2024-07-21 01:28:08.114519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.114643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.114655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:22.881 [2024-07-21 01:28:08.114668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:22.881 [2024-07-21 01:28:08.114678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.117052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.117082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:22.881 [2024-07-21 01:28:08.117097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.354 ms 00:18:22.881 [2024-07-21 01:28:08.117106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.118979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.119012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:22.881 [2024-07-21 
01:28:08.119027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:18:22.881 [2024-07-21 01:28:08.119036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.120356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.120388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:22.881 [2024-07-21 01:28:08.120402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:18:22.881 [2024-07-21 01:28:08.120411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.121695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.121729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:22.881 [2024-07-21 01:28:08.121744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:18:22.881 [2024-07-21 01:28:08.121754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.121789] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:22.881 [2024-07-21 01:28:08.121807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.121997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122032] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 
01:28:08.122350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:18:22.881 [2024-07-21 01:28:08.122654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.122995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.123005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.123021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.123033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.123047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.123058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.123074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:22.881 [2024-07-21 01:28:08.123091] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:22.881 [2024-07-21 01:28:08.123113] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fb64116f-0918-47a1-8bee-b098e1f13b9f 00:18:22.881 [2024-07-21 01:28:08.123124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:22.881 [2024-07-21 01:28:08.123137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:22.881 [2024-07-21 01:28:08.123149] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:22.881 [2024-07-21 01:28:08.123162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:22.881 [2024-07-21 01:28:08.123171] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:22.881 [2024-07-21 01:28:08.123183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:22.881 [2024-07-21 01:28:08.123193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:22.881 [2024-07-21 01:28:08.123205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:22.881 [2024-07-21 01:28:08.123214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:22.881 [2024-07-21 01:28:08.123225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.123235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:22.881 [2024-07-21 01:28:08.123248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:18:22.881 [2024-07-21 01:28:08.123257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.125846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.125869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:22.881 [2024-07-21 01:28:08.125893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:18:22.881 [2024-07-21 01:28:08.125903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.126084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:22.881 [2024-07-21 01:28:08.126096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:22.881 [2024-07-21 01:28:08.126117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:22.881 [2024-07-21 01:28:08.126126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.136736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.136905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.881 [2024-07-21 01:28:08.137083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.137127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.137238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.137382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.881 [2024-07-21 01:28:08.137439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.137470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.137554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.137595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.881 [2024-07-21 01:28:08.137629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.137658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.137773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.137812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.881 [2024-07-21 01:28:08.137863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.137895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.158500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.158679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.881 [2024-07-21 01:28:08.158706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.158717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.172325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.172359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.881 [2024-07-21 01:28:08.172375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.172394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.172460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.172471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.881 [2024-07-21 01:28:08.172490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.172500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:22.881 [2024-07-21 01:28:08.172538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.172549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.881 [2024-07-21 01:28:08.172562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.881 [2024-07-21 01:28:08.172571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.881 [2024-07-21 01:28:08.172683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.881 [2024-07-21 01:28:08.172697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.881 [2024-07-21 01:28:08.172710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.882 [2024-07-21 01:28:08.172723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.882 [2024-07-21 01:28:08.172772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.882 [2024-07-21 01:28:08.172785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:22.882 [2024-07-21 01:28:08.172799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.882 [2024-07-21 01:28:08.172809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.882 [2024-07-21 01:28:08.172874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.882 [2024-07-21 01:28:08.172896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.882 [2024-07-21 01:28:08.172909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.882 [2024-07-21 01:28:08.172923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.882 [2024-07-21 01:28:08.172997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:22.882 [2024-07-21 01:28:08.173009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.882 [2024-07-21 01:28:08.173023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:22.882 [2024-07-21 01:28:08.173033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.882 [2024-07-21 01:28:08.173205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.943 ms, result 0 00:18:23.449 01:28:08 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:23.449 [2024-07-21 01:28:08.616194] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:23.449 [2024-07-21 01:28:08.616316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89896 ] 00:18:23.708 [2024-07-21 01:28:08.784135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.708 [2024-07-21 01:28:08.847991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.708 [2024-07-21 01:28:08.992623] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.708 [2024-07-21 01:28:08.992708] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.968 [2024-07-21 01:28:09.147086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.147152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:23.968 [2024-07-21 01:28:09.147175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:23.968 [2024-07-21 01:28:09.147186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.149872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.149908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.968 [2024-07-21 01:28:09.149921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:18:23.968 [2024-07-21 01:28:09.149931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.150014] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:23.968 [2024-07-21 01:28:09.150275] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:23.968 [2024-07-21 01:28:09.150299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.150311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.968 [2024-07-21 01:28:09.150326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:18:23.968 [2024-07-21 01:28:09.150335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.152683] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:23.968 [2024-07-21 01:28:09.156149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.156179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:23.968 [2024-07-21 01:28:09.156192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.472 ms 00:18:23.968 [2024-07-21 01:28:09.156202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.156278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.156291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:23.968 [2024-07-21 01:28:09.156313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:23.968 [2024-07-21 01:28:09.156323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.168294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 
01:28:09.168324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.968 [2024-07-21 01:28:09.168341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.940 ms 00:18:23.968 [2024-07-21 01:28:09.168351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.168495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.168516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.968 [2024-07-21 01:28:09.168534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:23.968 [2024-07-21 01:28:09.168555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.168588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.168599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:23.968 [2024-07-21 01:28:09.168610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:23.968 [2024-07-21 01:28:09.168627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.168656] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:23.968 [2024-07-21 01:28:09.171267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.171291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.968 [2024-07-21 01:28:09.171307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.621 ms 00:18:23.968 [2024-07-21 01:28:09.171325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.171364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.171375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:23.968 [2024-07-21 01:28:09.171385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:23.968 [2024-07-21 01:28:09.171394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.171414] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:23.968 [2024-07-21 01:28:09.171443] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:23.968 [2024-07-21 01:28:09.171478] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:23.968 [2024-07-21 01:28:09.171498] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:23.968 [2024-07-21 01:28:09.171579] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:23.968 [2024-07-21 01:28:09.171601] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:23.968 [2024-07-21 01:28:09.171621] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:23.968 [2024-07-21 01:28:09.171634] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:23.968 [2024-07-21 01:28:09.171646] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:23.968 [2024-07-21 01:28:09.171658] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:23.968 [2024-07-21 01:28:09.171668] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:23.968 [2024-07-21 01:28:09.171677] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:23.968 [2024-07-21 01:28:09.171697] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:23.968 [2024-07-21 01:28:09.171707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.171717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:23.968 [2024-07-21 01:28:09.171728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:18:23.968 [2024-07-21 01:28:09.171737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.171808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.968 [2024-07-21 01:28:09.171819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:23.968 [2024-07-21 01:28:09.171845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:23.968 [2024-07-21 01:28:09.171854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.968 [2024-07-21 01:28:09.171962] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:23.968 [2024-07-21 01:28:09.171985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:23.968 [2024-07-21 01:28:09.171997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:23.968 [2024-07-21 01:28:09.172027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:23.968 [2024-07-21 01:28:09.172056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.968 [2024-07-21 01:28:09.172086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:23.968 [2024-07-21 01:28:09.172096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:23.968 [2024-07-21 01:28:09.172110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.968 [2024-07-21 01:28:09.172120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:23.968 [2024-07-21 01:28:09.172130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:23.968 [2024-07-21 01:28:09.172139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:23.968 [2024-07-21 01:28:09.172158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172167] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:23.968 [2024-07-21 01:28:09.172186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:23.968 [2024-07-21 01:28:09.172215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:23.968 [2024-07-21 01:28:09.172241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:23.968 [2024-07-21 01:28:09.172275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:23.968 [2024-07-21 01:28:09.172302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172311] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.968 [2024-07-21 01:28:09.172320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:23.968 [2024-07-21 01:28:09.172329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:23.968 [2024-07-21 01:28:09.172338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.968 [2024-07-21 01:28:09.172347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:23.968 [2024-07-21 01:28:09.172356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:23.968 [2024-07-21 01:28:09.172365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:23.968 [2024-07-21 01:28:09.172383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:23.968 [2024-07-21 01:28:09.172392] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172401] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:23.968 [2024-07-21 01:28:09.172422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:23.968 [2024-07-21 01:28:09.172439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.968 [2024-07-21 01:28:09.172459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:23.968 [2024-07-21 01:28:09.172468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:23.968 [2024-07-21 01:28:09.172478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:23.968 [2024-07-21 01:28:09.172487] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:23.968 [2024-07-21 01:28:09.172496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:23.968 [2024-07-21 01:28:09.172505] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:23.968 [2024-07-21 01:28:09.172516] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:23.968 [2024-07-21 01:28:09.172529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.968 [2024-07-21 01:28:09.172555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:23.968 [2024-07-21 01:28:09.172566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:23.968 [2024-07-21 01:28:09.172576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:23.968 [2024-07-21 01:28:09.172585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:23.968 [2024-07-21 01:28:09.172595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:23.968 [2024-07-21 01:28:09.172608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:23.969 [2024-07-21 01:28:09.172618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:23.969 [2024-07-21 01:28:09.172628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:23.969 [2024-07-21 01:28:09.172646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:23.969 [2024-07-21 01:28:09.172655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:23.969 [2024-07-21 01:28:09.172665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:23.969 [2024-07-21 01:28:09.172691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:23.969 [2024-07-21 01:28:09.172702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:23.969 [2024-07-21 01:28:09.172714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:23.969 [2024-07-21 01:28:09.172724] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:23.969 [2024-07-21 01:28:09.172735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.969 [2024-07-21 01:28:09.172748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:18:23.969 [2024-07-21 01:28:09.172759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:23.969 [2024-07-21 01:28:09.172769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:23.969 [2024-07-21 01:28:09.172780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:23.969 [2024-07-21 01:28:09.172792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.172806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:23.969 [2024-07-21 01:28:09.172824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:18:23.969 [2024-07-21 01:28:09.172834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.211328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.211417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:23.969 [2024-07-21 01:28:09.211459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.460 ms 00:18:23.969 [2024-07-21 01:28:09.211499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.211877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.211918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:23.969 [2024-07-21 01:28:09.211976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:23.969 [2024-07-21 01:28:09.212006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.231876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.231915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:23.969 [2024-07-21 01:28:09.231933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.816 ms 00:18:23.969 [2024-07-21 01:28:09.231961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.232043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.232060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.969 [2024-07-21 01:28:09.232075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:23.969 [2024-07-21 01:28:09.232088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.232904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.232928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.969 [2024-07-21 01:28:09.232944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:18:23.969 [2024-07-21 01:28:09.232967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.233131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.233162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.969 [2024-07-21 01:28:09.233176] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:18:23.969 [2024-07-21 01:28:09.233190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.243980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.244009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.969 [2024-07-21 01:28:09.244021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.778 ms 00:18:23.969 [2024-07-21 01:28:09.244032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.247843] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:23.969 [2024-07-21 01:28:09.247875] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:23.969 [2024-07-21 01:28:09.247902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.247913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:23.969 [2024-07-21 01:28:09.247924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.761 ms 00:18:23.969 [2024-07-21 01:28:09.247934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.261203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.261240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:23.969 [2024-07-21 01:28:09.261260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.240 ms 00:18:23.969 [2024-07-21 01:28:09.261285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.263335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.263371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:23.969 [2024-07-21 01:28:09.263384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:18:23.969 [2024-07-21 01:28:09.263395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.264960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.264991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:23.969 [2024-07-21 01:28:09.265002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:18:23.969 [2024-07-21 01:28:09.265013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.969 [2024-07-21 01:28:09.265302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.969 [2024-07-21 01:28:09.265332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:23.969 [2024-07-21 01:28:09.265344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:18:23.969 [2024-07-21 01:28:09.265355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.227 [2024-07-21 01:28:09.294854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.294904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:24.228 [2024-07-21 01:28:09.294921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.504 ms 
00:18:24.228 [2024-07-21 01:28:09.294932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 01:28:09.301163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:24.228 [2024-07-21 01:28:09.325409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.325451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.228 [2024-07-21 01:28:09.325467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.433 ms 00:18:24.228 [2024-07-21 01:28:09.325478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 01:28:09.325573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.325586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:24.228 [2024-07-21 01:28:09.325612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:24.228 [2024-07-21 01:28:09.325623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 01:28:09.325690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.325702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:24.228 [2024-07-21 01:28:09.325713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:24.228 [2024-07-21 01:28:09.325724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 01:28:09.325761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.325784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:24.228 [2024-07-21 01:28:09.325794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:24.228 [2024-07-21 01:28:09.325809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 01:28:09.326070] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:24.228 [2024-07-21 01:28:09.326123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.326155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:24.228 [2024-07-21 01:28:09.326186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:24.228 [2024-07-21 01:28:09.326228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 01:28:09.331080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.331221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:24.228 [2024-07-21 01:28:09.331296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:18:24.228 [2024-07-21 01:28:09.331327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 01:28:09.331505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.228 [2024-07-21 01:28:09.331521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:24.228 [2024-07-21 01:28:09.331533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:24.228 [2024-07-21 01:28:09.331544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.228 [2024-07-21 
01:28:09.332875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.228 [2024-07-21 01:28:09.333881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 185.745 ms, result 0 00:18:24.228 [2024-07-21 01:28:09.334714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:24.228 [2024-07-21 01:28:09.342527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.145  Copying: 26/256 [MB] (26 MBps) Copying: 49/256 [MB] (22 MBps) Copying: 72/256 [MB] (23 MBps) Copying: 95/256 [MB] (23 MBps) Copying: 119/256 [MB] (23 MBps) Copying: 143/256 [MB] (24 MBps) Copying: 167/256 [MB] (23 MBps) Copying: 191/256 [MB] (24 MBps) Copying: 213/256 [MB] (22 MBps) Copying: 237/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-21 01:28:20.449952] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.145 [2024-07-21 01:28:20.452962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.145 [2024-07-21 01:28:20.453051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:35.145 [2024-07-21 01:28:20.453097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:35.145 [2024-07-21 01:28:20.453131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.145 [2024-07-21 01:28:20.453200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:35.145 [2024-07-21 01:28:20.455207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.145 [2024-07-21 01:28:20.455252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:35.145 [2024-07-21 01:28:20.455277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.966 ms 00:18:35.145 [2024-07-21 01:28:20.455299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.405 [2024-07-21 01:28:20.456516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.405 [2024-07-21 01:28:20.456579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:35.405 [2024-07-21 01:28:20.456619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:18:35.405 [2024-07-21 01:28:20.456657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.405 [2024-07-21 01:28:20.463732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.405 [2024-07-21 01:28:20.463781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:35.405 [2024-07-21 01:28:20.463807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.044 ms 00:18:35.405 [2024-07-21 01:28:20.463845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.405 [2024-07-21 01:28:20.472869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.405 [2024-07-21 01:28:20.472906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.405 [2024-07-21 01:28:20.472947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.957 ms 00:18:35.405 [2024-07-21 01:28:20.472963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.405 [2024-07-21 01:28:20.474710] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.405 [2024-07-21 01:28:20.474745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.405 [2024-07-21 01:28:20.474757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:18:35.405 [2024-07-21 01:28:20.474768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.405 [2024-07-21 01:28:20.479684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.405 [2024-07-21 01:28:20.479723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.405 [2024-07-21 01:28:20.479736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.890 ms 00:18:35.405 [2024-07-21 01:28:20.479747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.405 [2024-07-21 01:28:20.479878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.405 [2024-07-21 01:28:20.479894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.405 [2024-07-21 01:28:20.479925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:35.405 [2024-07-21 01:28:20.479936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.405 [2024-07-21 01:28:20.482418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.405 [2024-07-21 01:28:20.482451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:35.405 [2024-07-21 01:28:20.482464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:18:35.406 [2024-07-21 01:28:20.482475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.406 [2024-07-21 01:28:20.484224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.406 [2024-07-21 01:28:20.484256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:35.406 [2024-07-21 01:28:20.484267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:18:35.406 [2024-07-21 01:28:20.484277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.406 [2024-07-21 01:28:20.485604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.406 [2024-07-21 01:28:20.485634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.406 [2024-07-21 01:28:20.485645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:18:35.406 [2024-07-21 01:28:20.485653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.406 [2024-07-21 01:28:20.486874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.406 [2024-07-21 01:28:20.486904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.406 [2024-07-21 01:28:20.486915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:18:35.406 [2024-07-21 01:28:20.486925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.406 [2024-07-21 01:28:20.486953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.406 [2024-07-21 01:28:20.486972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.486985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 
[2024-07-21 01:28:20.486997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:18:35.406 [2024-07-21 01:28:20.487267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.487992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.488002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:35.406 [2024-07-21 01:28:20.488012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:35.407 [2024-07-21 01:28:20.488022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:35.407 [2024-07-21 01:28:20.488032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:35.407 [2024-07-21 01:28:20.488050] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:35.407 [2024-07-21 01:28:20.488061] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
fb64116f-0918-47a1-8bee-b098e1f13b9f 00:18:35.407 [2024-07-21 01:28:20.488071] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:35.407 [2024-07-21 01:28:20.488082] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:35.407 [2024-07-21 01:28:20.488091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:35.407 [2024-07-21 01:28:20.488102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:35.407 [2024-07-21 01:28:20.488111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:35.407 [2024-07-21 01:28:20.488128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:35.407 [2024-07-21 01:28:20.488137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:35.407 [2024-07-21 01:28:20.488156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:35.407 [2024-07-21 01:28:20.488166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:35.407 [2024-07-21 01:28:20.488176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.407 [2024-07-21 01:28:20.488187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:35.407 [2024-07-21 01:28:20.488198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:18:35.407 [2024-07-21 01:28:20.488212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.490804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.407 [2024-07-21 01:28:20.490833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:35.407 [2024-07-21 01:28:20.490845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:18:35.407 [2024-07-21 01:28:20.490861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.491018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.407 [2024-07-21 01:28:20.491029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:35.407 [2024-07-21 01:28:20.491040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:18:35.407 [2024-07-21 01:28:20.491049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.500594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.500621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.407 [2024-07-21 01:28:20.500643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.500653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.500716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.500727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.407 [2024-07-21 01:28:20.500738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.500747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.500793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.500805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.407 
[2024-07-21 01:28:20.500816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.500842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.500861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.500871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.407 [2024-07-21 01:28:20.500881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.500890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.519036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.519067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.407 [2024-07-21 01:28:20.519079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.519094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.532547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.532580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.407 [2024-07-21 01:28:20.532593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.532603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.532651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.532663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.407 [2024-07-21 01:28:20.532701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.532711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.532748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.532759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.407 [2024-07-21 01:28:20.532770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.532780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.532885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.532900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.407 [2024-07-21 01:28:20.532911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.532922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.532960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.532975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.407 [2024-07-21 01:28:20.532986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.532996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.533046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.533058] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.407 [2024-07-21 01:28:20.533068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.533079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.533133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.407 [2024-07-21 01:28:20.533153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.407 [2024-07-21 01:28:20.533164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.407 [2024-07-21 01:28:20.533175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.407 [2024-07-21 01:28:20.533342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 80.501 ms, result 0 00:18:35.666 00:18:35.666 00:18:35.666 01:28:20 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:36.233 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:36.233 01:28:21 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:36.233 01:28:21 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:36.234 01:28:21 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:36.234 01:28:21 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.234 01:28:21 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:36.234 01:28:21 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:36.234 01:28:21 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89854 00:18:36.234 01:28:21 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89854 ']' 00:18:36.234 01:28:21 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89854 00:18:36.234 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (89854) - No such process 00:18:36.234 Process with pid 89854 is not found 00:18:36.234 01:28:21 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 89854 is not found' 00:18:36.234 00:18:36.234 real 0m58.230s 00:18:36.234 user 1m18.759s 00:18:36.234 sys 0m7.043s 00:18:36.234 01:28:21 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:36.234 01:28:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:36.234 ************************************ 00:18:36.234 END TEST ftl_trim 00:18:36.234 ************************************ 00:18:36.234 01:28:21 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:36.234 01:28:21 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:36.234 01:28:21 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:36.234 01:28:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:36.234 ************************************ 00:18:36.234 START TEST ftl_restore 00:18:36.234 ************************************ 00:18:36.234 01:28:21 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:36.493 * Looking for test storage... 
00:18:36.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.6nSdGBODp2 00:18:36.493 01:28:21 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90091 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90091 00:18:36.493 01:28:21 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.493 01:28:21 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 90091 ']' 00:18:36.493 01:28:21 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.493 01:28:21 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:36.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.493 01:28:21 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.493 01:28:21 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:36.493 01:28:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:36.493 [2024-07-21 01:28:21.780350] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:36.493 [2024-07-21 01:28:21.780474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90091 ] 00:18:36.752 [2024-07-21 01:28:21.949040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.752 [2024-07-21 01:28:22.012555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.318 01:28:22 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:37.318 01:28:22 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:18:37.318 01:28:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:37.318 01:28:22 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:37.318 01:28:22 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:37.318 01:28:22 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:37.318 01:28:22 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:37.318 01:28:22 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:37.576 01:28:22 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:37.576 01:28:22 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:37.576 01:28:22 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:37.576 01:28:22 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:18:37.576 01:28:22 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:37.576 01:28:22 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:18:37.576 01:28:22 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:37.576 01:28:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:37.834 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:37.834 { 00:18:37.834 "name": "nvme0n1", 00:18:37.834 "aliases": [ 00:18:37.834 "cef8efd2-db9d-479d-aefa-3472d8a2affb" 00:18:37.834 ], 00:18:37.834 "product_name": "NVMe disk", 00:18:37.834 "block_size": 4096, 00:18:37.834 "num_blocks": 1310720, 00:18:37.834 "uuid": "cef8efd2-db9d-479d-aefa-3472d8a2affb", 00:18:37.834 "assigned_rate_limits": { 00:18:37.834 "rw_ios_per_sec": 0, 00:18:37.834 "rw_mbytes_per_sec": 0, 00:18:37.834 "r_mbytes_per_sec": 0, 00:18:37.834 "w_mbytes_per_sec": 0 00:18:37.834 }, 00:18:37.834 "claimed": true, 00:18:37.834 "claim_type": "read_many_write_one", 00:18:37.834 "zoned": false, 00:18:37.834 "supported_io_types": { 00:18:37.834 "read": true, 00:18:37.834 "write": true, 00:18:37.834 "unmap": true, 00:18:37.834 "write_zeroes": true, 00:18:37.834 "flush": true, 00:18:37.834 "reset": true, 00:18:37.834 "compare": true, 00:18:37.834 "compare_and_write": false, 00:18:37.834 "abort": true, 00:18:37.834 "nvme_admin": true, 00:18:37.834 "nvme_io": true 00:18:37.834 }, 00:18:37.834 "driver_specific": { 00:18:37.834 "nvme": [ 00:18:37.834 { 00:18:37.834 "pci_address": "0000:00:11.0", 00:18:37.834 "trid": { 00:18:37.834 "trtype": "PCIe", 00:18:37.834 "traddr": "0000:00:11.0" 00:18:37.834 }, 00:18:37.834 "ctrlr_data": { 00:18:37.834 "cntlid": 0, 00:18:37.834 "vendor_id": "0x1b36", 00:18:37.834 "model_number": "QEMU NVMe Ctrl", 00:18:37.834 "serial_number": "12341", 00:18:37.834 "firmware_revision": "8.0.0", 00:18:37.834 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:37.834 "oacs": { 00:18:37.834 "security": 0, 00:18:37.834 "format": 1, 00:18:37.834 "firmware": 0, 00:18:37.834 "ns_manage": 1 00:18:37.834 }, 00:18:37.834 "multi_ctrlr": false, 00:18:37.834 "ana_reporting": false 00:18:37.835 }, 00:18:37.835 "vs": { 00:18:37.835 "nvme_version": "1.4" 00:18:37.835 }, 00:18:37.835 "ns_data": { 00:18:37.835 "id": 1, 00:18:37.835 "can_share": false 00:18:37.835 } 00:18:37.835 } 00:18:37.835 ], 00:18:37.835 "mp_policy": "active_passive" 00:18:37.835 } 00:18:37.835 } 00:18:37.835 ]' 00:18:37.835 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:37.835 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:37.835 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:37.835 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:18:37.835 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:18:37.835 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:18:37.835 01:28:23 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:37.835 01:28:23 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:37.835 01:28:23 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:37.835 01:28:23 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:37.835 01:28:23 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.093 01:28:23 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=84088371-5326-4209-8e8f-80ab816b5246 00:18:38.093 01:28:23 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:18:38.093 01:28:23 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84088371-5326-4209-8e8f-80ab816b5246 00:18:38.351 01:28:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:38.351 01:28:23 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5b5d56f9-eca3-4ab4-ac48-f64980d3bda6 00:18:38.351 01:28:23 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5b5d56f9-eca3-4ab4-ac48-f64980d3bda6 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:38.609 01:28:23 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:38.609 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:38.609 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:38.609 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:38.609 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:38.609 01:28:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:38.868 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:38.868 { 00:18:38.868 "name": "b3d9f1d7-8c8e-4286-9a1b-102d15953240", 00:18:38.868 "aliases": [ 00:18:38.868 "lvs/nvme0n1p0" 00:18:38.868 ], 00:18:38.868 "product_name": "Logical Volume", 00:18:38.868 "block_size": 4096, 00:18:38.868 "num_blocks": 26476544, 00:18:38.868 "uuid": "b3d9f1d7-8c8e-4286-9a1b-102d15953240", 00:18:38.868 "assigned_rate_limits": { 00:18:38.868 "rw_ios_per_sec": 0, 00:18:38.868 "rw_mbytes_per_sec": 0, 00:18:38.868 "r_mbytes_per_sec": 0, 00:18:38.868 "w_mbytes_per_sec": 0 00:18:38.868 }, 00:18:38.868 "claimed": false, 00:18:38.868 "zoned": false, 00:18:38.868 "supported_io_types": { 00:18:38.868 "read": true, 00:18:38.868 "write": true, 00:18:38.868 "unmap": true, 00:18:38.868 "write_zeroes": true, 00:18:38.868 "flush": false, 00:18:38.868 "reset": true, 00:18:38.868 "compare": false, 00:18:38.868 "compare_and_write": false, 00:18:38.868 "abort": false, 00:18:38.868 "nvme_admin": false, 00:18:38.868 "nvme_io": false 00:18:38.868 }, 00:18:38.868 "driver_specific": { 00:18:38.868 "lvol": { 00:18:38.868 "lvol_store_uuid": "5b5d56f9-eca3-4ab4-ac48-f64980d3bda6", 00:18:38.868 "base_bdev": "nvme0n1", 00:18:38.868 "thin_provision": true, 00:18:38.868 "num_allocated_clusters": 0, 00:18:38.868 "snapshot": false, 00:18:38.868 "clone": false, 00:18:38.868 "esnap_clone": false 00:18:38.868 } 00:18:38.868 } 00:18:38.868 } 00:18:38.868 ]' 00:18:38.868 01:28:24 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:38.868 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:38.868 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:38.868 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:38.868 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:38.868 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:38.868 01:28:24 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:38.868 01:28:24 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:38.868 01:28:24 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:39.126 01:28:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.126 01:28:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.126 01:28:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:39.126 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:39.126 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:39.126 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:39.126 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:39.126 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:39.384 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:39.384 { 00:18:39.384 "name": "b3d9f1d7-8c8e-4286-9a1b-102d15953240", 00:18:39.384 "aliases": [ 00:18:39.384 "lvs/nvme0n1p0" 00:18:39.384 ], 00:18:39.384 "product_name": "Logical Volume", 00:18:39.384 "block_size": 4096, 00:18:39.384 "num_blocks": 26476544, 00:18:39.384 "uuid": "b3d9f1d7-8c8e-4286-9a1b-102d15953240", 00:18:39.384 "assigned_rate_limits": { 00:18:39.384 "rw_ios_per_sec": 0, 00:18:39.384 "rw_mbytes_per_sec": 0, 00:18:39.384 "r_mbytes_per_sec": 0, 00:18:39.384 "w_mbytes_per_sec": 0 00:18:39.384 }, 00:18:39.384 "claimed": false, 00:18:39.384 "zoned": false, 00:18:39.384 "supported_io_types": { 00:18:39.384 "read": true, 00:18:39.384 "write": true, 00:18:39.384 "unmap": true, 00:18:39.384 "write_zeroes": true, 00:18:39.384 "flush": false, 00:18:39.384 "reset": true, 00:18:39.384 "compare": false, 00:18:39.384 "compare_and_write": false, 00:18:39.384 "abort": false, 00:18:39.384 "nvme_admin": false, 00:18:39.384 "nvme_io": false 00:18:39.384 }, 00:18:39.384 "driver_specific": { 00:18:39.384 "lvol": { 00:18:39.384 "lvol_store_uuid": "5b5d56f9-eca3-4ab4-ac48-f64980d3bda6", 00:18:39.384 "base_bdev": "nvme0n1", 00:18:39.384 "thin_provision": true, 00:18:39.384 "num_allocated_clusters": 0, 00:18:39.384 "snapshot": false, 00:18:39.384 "clone": false, 00:18:39.384 "esnap_clone": false 00:18:39.384 } 00:18:39.384 } 00:18:39.384 } 00:18:39.384 ]' 00:18:39.384 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:39.384 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:39.384 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:39.384 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:39.384 01:28:24 ftl.ftl_restore 
-- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:39.384 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:39.384 01:28:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:39.384 01:28:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:39.643 01:28:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:39.643 01:28:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:39.643 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:39.643 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:39.643 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:39.643 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:39.643 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3d9f1d7-8c8e-4286-9a1b-102d15953240 00:18:39.902 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:39.902 { 00:18:39.902 "name": "b3d9f1d7-8c8e-4286-9a1b-102d15953240", 00:18:39.902 "aliases": [ 00:18:39.902 "lvs/nvme0n1p0" 00:18:39.902 ], 00:18:39.902 "product_name": "Logical Volume", 00:18:39.902 "block_size": 4096, 00:18:39.902 "num_blocks": 26476544, 00:18:39.902 "uuid": "b3d9f1d7-8c8e-4286-9a1b-102d15953240", 00:18:39.902 "assigned_rate_limits": { 00:18:39.902 "rw_ios_per_sec": 0, 00:18:39.902 "rw_mbytes_per_sec": 0, 00:18:39.902 "r_mbytes_per_sec": 0, 00:18:39.902 "w_mbytes_per_sec": 0 00:18:39.902 }, 00:18:39.902 "claimed": false, 00:18:39.902 "zoned": false, 00:18:39.902 "supported_io_types": { 00:18:39.902 "read": true, 00:18:39.902 "write": true, 00:18:39.902 "unmap": true, 00:18:39.902 "write_zeroes": true, 00:18:39.902 "flush": false, 00:18:39.902 "reset": true, 00:18:39.902 "compare": false, 00:18:39.902 "compare_and_write": false, 00:18:39.902 "abort": false, 00:18:39.902 "nvme_admin": false, 00:18:39.902 "nvme_io": false 00:18:39.902 }, 00:18:39.902 "driver_specific": { 00:18:39.902 "lvol": { 00:18:39.902 "lvol_store_uuid": "5b5d56f9-eca3-4ab4-ac48-f64980d3bda6", 00:18:39.902 "base_bdev": "nvme0n1", 00:18:39.902 "thin_provision": true, 00:18:39.902 "num_allocated_clusters": 0, 00:18:39.902 "snapshot": false, 00:18:39.902 "clone": false, 00:18:39.902 "esnap_clone": false 00:18:39.902 } 00:18:39.902 } 00:18:39.902 } 00:18:39.902 ]' 00:18:39.902 01:28:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:39.902 01:28:25 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:39.902 01:28:25 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:39.902 01:28:25 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:39.902 01:28:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:39.902 01:28:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:39.902 01:28:25 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:39.902 01:28:25 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b3d9f1d7-8c8e-4286-9a1b-102d15953240 --l2p_dram_limit 10' 00:18:39.902 01:28:25 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:39.902 01:28:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:18:39.902 01:28:25 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:39.902 01:28:25 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:39.902 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:39.902 01:28:25 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b3d9f1d7-8c8e-4286-9a1b-102d15953240 --l2p_dram_limit 10 -c nvc0n1p0 00:18:40.162 [2024-07-21 01:28:25.247246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.162 [2024-07-21 01:28:25.247298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.162 [2024-07-21 01:28:25.247317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.163 [2024-07-21 01:28:25.247327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.247375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.247387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.163 [2024-07-21 01:28:25.247399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:40.163 [2024-07-21 01:28:25.247412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.247438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.163 [2024-07-21 01:28:25.247642] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.163 [2024-07-21 01:28:25.247677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.247687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.163 [2024-07-21 01:28:25.247701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:40.163 [2024-07-21 01:28:25.247711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.247784] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ce280a0b-3e48-42ae-87a9-3fe790281080 00:18:40.163 [2024-07-21 01:28:25.250032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.250069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:40.163 [2024-07-21 01:28:25.250085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:40.163 [2024-07-21 01:28:25.250098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.263216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.263243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.163 [2024-07-21 01:28:25.263257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.090 ms 00:18:40.163 [2024-07-21 01:28:25.263270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.263355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.263379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.163 [2024-07-21 01:28:25.263390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.066 ms 00:18:40.163 [2024-07-21 01:28:25.263410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.263472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.263487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.163 [2024-07-21 01:28:25.263497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:40.163 [2024-07-21 01:28:25.263510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.263533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.163 [2024-07-21 01:28:25.266251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.266280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.163 [2024-07-21 01:28:25.266295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:18:40.163 [2024-07-21 01:28:25.266316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.266355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.266372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.163 [2024-07-21 01:28:25.266386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:40.163 [2024-07-21 01:28:25.266396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.266421] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:40.163 [2024-07-21 01:28:25.266553] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.163 [2024-07-21 01:28:25.266571] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.163 [2024-07-21 01:28:25.266585] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:40.163 [2024-07-21 01:28:25.266601] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.163 [2024-07-21 01:28:25.266614] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.163 [2024-07-21 01:28:25.266627] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.163 [2024-07-21 01:28:25.266638] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.163 [2024-07-21 01:28:25.266653] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.163 [2024-07-21 01:28:25.266663] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.163 [2024-07-21 01:28:25.266675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.163 [2024-07-21 01:28:25.266686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.163 [2024-07-21 01:28:25.266708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:40.163 [2024-07-21 01:28:25.266718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.266788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
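(Editor's sketch, not part of the captured log: condensing the rpc.py calls already echoed above, the restore test assembled the FTL device now being started roughly as follows. Device names, UUIDs and sizes are copied from this run. The MiB figures come from the harness helper get_bdev_size, which reads block_size and num_blocks from bdev_get_bdevs via jq and converts their product to MiB: 4096 B x 1310720 blocks = 5120 MiB for nvme0n1, and 4096 B x 26476544 blocks = 103424 MiB for the lvol; the 5171 MiB cache split is derived by the helper from the base size, roughly 5 % of 103424 MiB.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base data device (0000:00:11.0) and NV cache device (0000:00:10.0)
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # Thin-provisioned 103424 MiB lvol on the base device
    # (any pre-existing lvstore is removed first via bdev_lvol_delete_lvstore)
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 5b5d56f9-eca3-4ab4-ac48-f64980d3bda6
    # 5171 MiB write-buffer partition carved out of the cache device
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of the lvol + cache split, L2P capped at 10 MiB of DRAM
    $rpc -t 240 bdev_ftl_create -b ftl0 -d b3d9f1d7-8c8e-4286-9a1b-102d15953240 \
         --l2p_dram_limit 10 -c nvc0n1p0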
00:18:40.163 [2024-07-21 01:28:25.266800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.163 [2024-07-21 01:28:25.266816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:40.163 [2024-07-21 01:28:25.266844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.163 [2024-07-21 01:28:25.266941] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.163 [2024-07-21 01:28:25.266955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.163 [2024-07-21 01:28:25.266968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.163 [2024-07-21 01:28:25.266979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.163 [2024-07-21 01:28:25.266992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.163 [2024-07-21 01:28:25.267000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.163 [2024-07-21 01:28:25.267032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.163 [2024-07-21 01:28:25.267051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.163 [2024-07-21 01:28:25.267060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.163 [2024-07-21 01:28:25.267071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.163 [2024-07-21 01:28:25.267080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.163 [2024-07-21 01:28:25.267095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:40.163 [2024-07-21 01:28:25.267103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.163 [2024-07-21 01:28:25.267124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.163 [2024-07-21 01:28:25.267156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.163 [2024-07-21 01:28:25.267185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.163 [2024-07-21 01:28:25.267218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267238] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:18:40.163 [2024-07-21 01:28:25.267246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.163 [2024-07-21 01:28:25.267281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.163 [2024-07-21 01:28:25.267301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.163 [2024-07-21 01:28:25.267310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:40.163 [2024-07-21 01:28:25.267321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.163 [2024-07-21 01:28:25.267329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.163 [2024-07-21 01:28:25.267340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:40.163 [2024-07-21 01:28:25.267348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.163 [2024-07-21 01:28:25.267368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:40.163 [2024-07-21 01:28:25.267379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267387] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.163 [2024-07-21 01:28:25.267400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.163 [2024-07-21 01:28:25.267409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.163 [2024-07-21 01:28:25.267437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.163 [2024-07-21 01:28:25.267448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.163 [2024-07-21 01:28:25.267457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.163 [2024-07-21 01:28:25.267471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.163 [2024-07-21 01:28:25.267480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.163 [2024-07-21 01:28:25.267491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.163 [2024-07-21 01:28:25.267506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.163 [2024-07-21 01:28:25.267522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.163 [2024-07-21 01:28:25.267535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.163 [2024-07-21 01:28:25.267548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:40.164 [2024-07-21 01:28:25.267558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:18:40.164 [2024-07-21 01:28:25.267571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:40.164 [2024-07-21 01:28:25.267581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:40.164 [2024-07-21 01:28:25.267595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:40.164 [2024-07-21 01:28:25.267605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:40.164 [2024-07-21 01:28:25.267620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:40.164 [2024-07-21 01:28:25.267631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:40.164 [2024-07-21 01:28:25.267644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:40.164 [2024-07-21 01:28:25.267653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:40.164 [2024-07-21 01:28:25.267666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:40.164 [2024-07-21 01:28:25.267676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:40.164 [2024-07-21 01:28:25.267688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:40.164 [2024-07-21 01:28:25.267698] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.164 [2024-07-21 01:28:25.267711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.164 [2024-07-21 01:28:25.267722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.164 [2024-07-21 01:28:25.267734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.164 [2024-07-21 01:28:25.267745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.164 [2024-07-21 01:28:25.267759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.164 [2024-07-21 01:28:25.267769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.164 [2024-07-21 01:28:25.267782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.164 [2024-07-21 01:28:25.267791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:18:40.164 [2024-07-21 01:28:25.267806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.164 [2024-07-21 01:28:25.267881] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:18:40.164 [2024-07-21 01:28:25.267899] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:44.407 [2024-07-21 01:28:29.136816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.136915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:44.407 [2024-07-21 01:28:29.136936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3875.213 ms 00:18:44.407 [2024-07-21 01:28:29.136950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.155821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.155889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.407 [2024-07-21 01:28:29.155907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.766 ms 00:18:44.407 [2024-07-21 01:28:29.155921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.156015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.156039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:44.407 [2024-07-21 01:28:29.156051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:44.407 [2024-07-21 01:28:29.156064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.172295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.172345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.407 [2024-07-21 01:28:29.172359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.207 ms 00:18:44.407 [2024-07-21 01:28:29.172373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.172440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.172455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.407 [2024-07-21 01:28:29.172466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:44.407 [2024-07-21 01:28:29.172479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.173296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.173335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.407 [2024-07-21 01:28:29.173348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:18:44.407 [2024-07-21 01:28:29.173361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.173464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.173486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.407 [2024-07-21 01:28:29.173497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:44.407 [2024-07-21 01:28:29.173513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.184912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.184955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:18:44.407 [2024-07-21 01:28:29.184967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.388 ms 00:18:44.407 [2024-07-21 01:28:29.184981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.193679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:44.407 [2024-07-21 01:28:29.198575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.198602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.407 [2024-07-21 01:28:29.198618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.538 ms 00:18:44.407 [2024-07-21 01:28:29.198628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.295056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.295115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:44.407 [2024-07-21 01:28:29.295140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.547 ms 00:18:44.407 [2024-07-21 01:28:29.295161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.295353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.295374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.407 [2024-07-21 01:28:29.295389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:44.407 [2024-07-21 01:28:29.295399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.299449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.299483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:44.407 [2024-07-21 01:28:29.299499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:18:44.407 [2024-07-21 01:28:29.299513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.407 [2024-07-21 01:28:29.302530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.407 [2024-07-21 01:28:29.302562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:44.408 [2024-07-21 01:28:29.302579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.978 ms 00:18:44.408 [2024-07-21 01:28:29.302589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.302896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.408 [2024-07-21 01:28:29.302923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:44.408 [2024-07-21 01:28:29.302945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:18:44.408 [2024-07-21 01:28:29.302955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.351027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.408 [2024-07-21 01:28:29.351062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:44.408 [2024-07-21 01:28:29.351079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.105 ms 00:18:44.408 [2024-07-21 01:28:29.351092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.356485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.408 [2024-07-21 01:28:29.356518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:44.408 [2024-07-21 01:28:29.356534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.349 ms 00:18:44.408 [2024-07-21 01:28:29.356544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.359699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.408 [2024-07-21 01:28:29.359730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:44.408 [2024-07-21 01:28:29.359745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:18:44.408 [2024-07-21 01:28:29.359754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.363430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.408 [2024-07-21 01:28:29.363463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.408 [2024-07-21 01:28:29.363478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.642 ms 00:18:44.408 [2024-07-21 01:28:29.363488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.363533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.408 [2024-07-21 01:28:29.363544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.408 [2024-07-21 01:28:29.363558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:44.408 [2024-07-21 01:28:29.363567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.363640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.408 [2024-07-21 01:28:29.363651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.408 [2024-07-21 01:28:29.363665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:44.408 [2024-07-21 01:28:29.363674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.408 [2024-07-21 01:28:29.365018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4123.940 ms, result 0 00:18:44.408 { 00:18:44.408 "name": "ftl0", 00:18:44.408 "uuid": "ce280a0b-3e48-42ae-87a9-3fe790281080" 00:18:44.408 } 00:18:44.408 01:28:29 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:44.408 01:28:29 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:44.408 01:28:29 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:44.408 01:28:29 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:44.668 [2024-07-21 01:28:29.734373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.734581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.668 [2024-07-21 01:28:29.734602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:44.668 [2024-07-21 01:28:29.734622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.734653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.668 [2024-07-21 01:28:29.735789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.735812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.668 [2024-07-21 01:28:29.735839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:18:44.668 [2024-07-21 01:28:29.735850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.736051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.736063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.668 [2024-07-21 01:28:29.736077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:18:44.668 [2024-07-21 01:28:29.736088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.738575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.738598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:44.668 [2024-07-21 01:28:29.738620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.470 ms 00:18:44.668 [2024-07-21 01:28:29.738630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.743307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.743336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:44.668 [2024-07-21 01:28:29.743350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:18:44.668 [2024-07-21 01:28:29.743359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.744930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.744964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:44.668 [2024-07-21 01:28:29.744983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:18:44.668 [2024-07-21 01:28:29.744992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.750730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.750765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:44.668 [2024-07-21 01:28:29.750781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.708 ms 00:18:44.668 [2024-07-21 01:28:29.750790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.750909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.750923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:44.668 [2024-07-21 01:28:29.750936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:44.668 [2024-07-21 01:28:29.750949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.753064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.753097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:44.668 [2024-07-21 01:28:29.753111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.095 ms 
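(Editor's sketch, not part of the captured log: the unload now being traced corresponds to the tail of restore.sh, which first snapshots the bdev subsystem configuration so the later restore phase can bring the same FTL device back, then unloads it; the unload is what drives the Persist L2P, persist band info metadata and Set FTL clean state steps logged around this point. The redirection target below is an assumption, since xtrace does not echo redirects.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Save the current bdev subsystem config for the restore phase
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > "$testdir/config/ftl.json"     # assumed destination
    # Unload the FTL bdev; metadata is persisted and the device is marked clean
    $rpc bdev_ftl_unload -b ftl0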
00:18:44.668 [2024-07-21 01:28:29.753121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.754692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.754725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:44.668 [2024-07-21 01:28:29.754743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:18:44.668 [2024-07-21 01:28:29.754752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.668 [2024-07-21 01:28:29.756095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.668 [2024-07-21 01:28:29.756125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:44.669 [2024-07-21 01:28:29.756139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:18:44.669 [2024-07-21 01:28:29.756147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.669 [2024-07-21 01:28:29.757308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.669 [2024-07-21 01:28:29.757339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:44.669 [2024-07-21 01:28:29.757353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:18:44.669 [2024-07-21 01:28:29.757363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.669 [2024-07-21 01:28:29.757396] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:44.669 [2024-07-21 01:28:29.757413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 
0 state: free 00:18:44.669 [2024-07-21 01:28:29.757590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
39: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.757991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758229] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:44.669 [2024-07-21 01:28:29.758514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758537] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:44.670 [2024-07-21 01:28:29.758685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:44.670 [2024-07-21 01:28:29.758701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ce280a0b-3e48-42ae-87a9-3fe790281080 00:18:44.670 [2024-07-21 01:28:29.758711] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:44.670 [2024-07-21 01:28:29.758724] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:44.670 [2024-07-21 01:28:29.758733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:44.670 [2024-07-21 01:28:29.758754] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:44.670 [2024-07-21 01:28:29.758764] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:44.670 [2024-07-21 01:28:29.758776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:44.670 [2024-07-21 01:28:29.758789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:44.670 [2024-07-21 01:28:29.758801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:44.670 [2024-07-21 01:28:29.758809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:44.670 [2024-07-21 01:28:29.758821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.670 [2024-07-21 01:28:29.758831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:44.670 [2024-07-21 01:28:29.758854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:18:44.670 [2024-07-21 01:28:29.758864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.761282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.670 [2024-07-21 01:28:29.761304] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:44.670 [2024-07-21 01:28:29.761322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.400 ms 00:18:44.670 [2024-07-21 01:28:29.761331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.761481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.670 [2024-07-21 01:28:29.761493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:44.670 [2024-07-21 01:28:29.761506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:44.670 [2024-07-21 01:28:29.761517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.771545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.771573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.670 [2024-07-21 01:28:29.771587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.771603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.771656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.771668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.670 [2024-07-21 01:28:29.771681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.771690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.771760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.771773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.670 [2024-07-21 01:28:29.771789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.771807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.771852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.771864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.670 [2024-07-21 01:28:29.771876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.771886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.789384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.789417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.670 [2024-07-21 01:28:29.789433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.789445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.801808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.801854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.670 [2024-07-21 01:28:29.801870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.801880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.801957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:44.670 [2024-07-21 01:28:29.801970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.670 [2024-07-21 01:28:29.801987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.801998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.802053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.802068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.670 [2024-07-21 01:28:29.802090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.802100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.802193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.802206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.670 [2024-07-21 01:28:29.802219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.802237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.802278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.802290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:44.670 [2024-07-21 01:28:29.802306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.802315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.802370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.802381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.670 [2024-07-21 01:28:29.802397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.802407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.802461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.670 [2024-07-21 01:28:29.802476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.670 [2024-07-21 01:28:29.802497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.670 [2024-07-21 01:28:29.802506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.670 [2024-07-21 01:28:29.802657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.339 ms, result 0 00:18:44.670 true 00:18:44.670 01:28:29 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90091 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90091 ']' 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90091 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90091 00:18:44.670 killing process with pid 90091 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:44.670 01:28:29 ftl.ftl_restore -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90091' 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 90091 00:18:44.670 01:28:29 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 90091 00:18:47.955 01:28:33 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:52.142 262144+0 records in 00:18:52.142 262144+0 records out 00:18:52.142 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.78512 s, 284 MB/s 00:18:52.142 01:28:36 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:53.519 01:28:38 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:53.519 [2024-07-21 01:28:38.561074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:53.519 [2024-07-21 01:28:38.561178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90300 ] 00:18:53.519 [2024-07-21 01:28:38.727929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.519 [2024-07-21 01:28:38.769333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.779 [2024-07-21 01:28:38.871291] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:53.779 [2024-07-21 01:28:38.871383] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:53.779 [2024-07-21 01:28:39.022206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.779 [2024-07-21 01:28:39.022265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.779 [2024-07-21 01:28:39.022296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:53.779 [2024-07-21 01:28:39.022306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.779 [2024-07-21 01:28:39.022368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.779 [2024-07-21 01:28:39.022381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.779 [2024-07-21 01:28:39.022391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:53.779 [2024-07-21 01:28:39.022406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.779 [2024-07-21 01:28:39.022426] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:53.779 [2024-07-21 01:28:39.022688] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.780 [2024-07-21 01:28:39.022722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.022736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.780 [2024-07-21 01:28:39.022747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:18:53.780 [2024-07-21 01:28:39.022756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.024176] 
mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:53.780 [2024-07-21 01:28:39.026745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.026782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:53.780 [2024-07-21 01:28:39.026815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:18:53.780 [2024-07-21 01:28:39.026825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.026892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.026905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:53.780 [2024-07-21 01:28:39.026917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:53.780 [2024-07-21 01:28:39.026927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.033702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.033736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.780 [2024-07-21 01:28:39.033748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.710 ms 00:18:53.780 [2024-07-21 01:28:39.033757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.033919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.033937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.780 [2024-07-21 01:28:39.033948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:18:53.780 [2024-07-21 01:28:39.033958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.034019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.034034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.780 [2024-07-21 01:28:39.034049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:53.780 [2024-07-21 01:28:39.034060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.034086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.780 [2024-07-21 01:28:39.035704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.035733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.780 [2024-07-21 01:28:39.035744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:18:53.780 [2024-07-21 01:28:39.035754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.035785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.035803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.780 [2024-07-21 01:28:39.035838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:53.780 [2024-07-21 01:28:39.035860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.035890] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:53.780 [2024-07-21 01:28:39.035913] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:53.780 [2024-07-21 01:28:39.035952] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:53.780 [2024-07-21 01:28:39.035969] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:53.780 [2024-07-21 01:28:39.036051] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.780 [2024-07-21 01:28:39.036073] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.780 [2024-07-21 01:28:39.036089] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:53.780 [2024-07-21 01:28:39.036102] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036114] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036124] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:53.780 [2024-07-21 01:28:39.036135] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.780 [2024-07-21 01:28:39.036144] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.780 [2024-07-21 01:28:39.036154] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.780 [2024-07-21 01:28:39.036165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.036175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.780 [2024-07-21 01:28:39.036185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:53.780 [2024-07-21 01:28:39.036197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.036267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.780 [2024-07-21 01:28:39.036278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.780 [2024-07-21 01:28:39.036288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:53.780 [2024-07-21 01:28:39.036306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.780 [2024-07-21 01:28:39.036388] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.780 [2024-07-21 01:28:39.036401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.780 [2024-07-21 01:28:39.036412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.780 [2024-07-21 01:28:39.036445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.780 [2024-07-21 01:28:39.036473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:18:53.780 [2024-07-21 01:28:39.036483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.780 [2024-07-21 01:28:39.036492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:53.780 [2024-07-21 01:28:39.036502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:53.780 [2024-07-21 01:28:39.036519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.780 [2024-07-21 01:28:39.036528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.780 [2024-07-21 01:28:39.036537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:53.780 [2024-07-21 01:28:39.036545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.780 [2024-07-21 01:28:39.036567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.780 [2024-07-21 01:28:39.036594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.780 [2024-07-21 01:28:39.036621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.780 [2024-07-21 01:28:39.036657] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:53.780 [2024-07-21 01:28:39.036684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.780 [2024-07-21 01:28:39.036717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.780 [2024-07-21 01:28:39.036735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.780 [2024-07-21 01:28:39.036744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:53.780 [2024-07-21 01:28:39.036753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.780 [2024-07-21 01:28:39.036762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.780 [2024-07-21 01:28:39.036771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:53.780 [2024-07-21 01:28:39.036781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036791] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.780 [2024-07-21 01:28:39.036800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:53.780 [2024-07-21 01:28:39.036809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036817] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.780 [2024-07-21 01:28:39.036844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.780 [2024-07-21 01:28:39.036861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.780 [2024-07-21 01:28:39.036880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:53.780 [2024-07-21 01:28:39.036893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.780 [2024-07-21 01:28:39.036902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.780 [2024-07-21 01:28:39.036911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.780 [2024-07-21 01:28:39.036921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.780 [2024-07-21 01:28:39.036930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.780 [2024-07-21 01:28:39.036940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.780 [2024-07-21 01:28:39.036953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.781 [2024-07-21 01:28:39.036964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:53.781 [2024-07-21 01:28:39.036974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:53.781 [2024-07-21 01:28:39.036984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:53.781 [2024-07-21 01:28:39.036994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:53.781 [2024-07-21 01:28:39.037004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:53.781 [2024-07-21 01:28:39.037014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:53.781 [2024-07-21 01:28:39.037025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:53.781 [2024-07-21 01:28:39.037034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:53.781 [2024-07-21 01:28:39.037044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:53.781 [2024-07-21 01:28:39.037057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:53.781 [2024-07-21 01:28:39.037067] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:53.781 [2024-07-21 01:28:39.037077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:53.781 [2024-07-21 01:28:39.037086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:53.781 [2024-07-21 01:28:39.037097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:53.781 [2024-07-21 01:28:39.037106] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.781 [2024-07-21 01:28:39.037117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.781 [2024-07-21 01:28:39.037128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.781 [2024-07-21 01:28:39.037139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.781 [2024-07-21 01:28:39.037149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.781 [2024-07-21 01:28:39.037159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.781 [2024-07-21 01:28:39.037169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.037179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.781 [2024-07-21 01:28:39.037191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:18:53.781 [2024-07-21 01:28:39.037207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.060404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.060447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.781 [2024-07-21 01:28:39.060465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.188 ms 00:18:53.781 [2024-07-21 01:28:39.060478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.060573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.060587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:53.781 [2024-07-21 01:28:39.060601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:53.781 [2024-07-21 01:28:39.060618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.071213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.071259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.781 [2024-07-21 01:28:39.071272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.523 ms 00:18:53.781 [2024-07-21 01:28:39.071284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.071319] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.071330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.781 [2024-07-21 01:28:39.071341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:53.781 [2024-07-21 01:28:39.071363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.071818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.071869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.781 [2024-07-21 01:28:39.071882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:18:53.781 [2024-07-21 01:28:39.071901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.072029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.072052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.781 [2024-07-21 01:28:39.072063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:53.781 [2024-07-21 01:28:39.072075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.078013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.078045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:53.781 [2024-07-21 01:28:39.078057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.922 ms 00:18:53.781 [2024-07-21 01:28:39.078067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.781 [2024-07-21 01:28:39.080591] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:53.781 [2024-07-21 01:28:39.080627] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:53.781 [2024-07-21 01:28:39.080649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.781 [2024-07-21 01:28:39.080660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:53.781 [2024-07-21 01:28:39.080670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:18:53.781 [2024-07-21 01:28:39.080680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.093228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.093274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:54.041 [2024-07-21 01:28:39.093289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.529 ms 00:18:54.041 [2024-07-21 01:28:39.093299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.094948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.094980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:54.041 [2024-07-21 01:28:39.094991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:18:54.041 [2024-07-21 01:28:39.095001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.096429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 
01:28:39.096460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:54.041 [2024-07-21 01:28:39.096471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:18:54.041 [2024-07-21 01:28:39.096481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.096760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.096778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:54.041 [2024-07-21 01:28:39.096789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:18:54.041 [2024-07-21 01:28:39.096799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.117310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.117368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:54.041 [2024-07-21 01:28:39.117384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.520 ms 00:18:54.041 [2024-07-21 01:28:39.117394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.123517] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:54.041 [2024-07-21 01:28:39.125962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.125996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:54.041 [2024-07-21 01:28:39.126019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:18:54.041 [2024-07-21 01:28:39.126028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.126075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.126094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:54.041 [2024-07-21 01:28:39.126104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:54.041 [2024-07-21 01:28:39.126114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.126189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.126201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:54.041 [2024-07-21 01:28:39.126218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:54.041 [2024-07-21 01:28:39.126234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.126253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.126264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:54.041 [2024-07-21 01:28:39.126274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.041 [2024-07-21 01:28:39.126290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.126324] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:54.041 [2024-07-21 01:28:39.126336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.126346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:54.041 [2024-07-21 
01:28:39.126357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:54.041 [2024-07-21 01:28:39.126369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.130069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.130202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:54.041 [2024-07-21 01:28:39.130371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.688 ms 00:18:54.041 [2024-07-21 01:28:39.130408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.130532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.041 [2024-07-21 01:28:39.130573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:54.041 [2024-07-21 01:28:39.130603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:54.041 [2024-07-21 01:28:39.130685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.041 [2024-07-21 01:28:39.132066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.617 ms, result 0 00:19:36.338  Copying: 23/1024 [MB] (23 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 72/1024 [MB] (24 MBps) Copying: 98/1024 [MB] (25 MBps) Copying: 122/1024 [MB] (24 MBps) Copying: 148/1024 [MB] (25 MBps) Copying: 173/1024 [MB] (25 MBps) Copying: 198/1024 [MB] (25 MBps) Copying: 223/1024 [MB] (24 MBps) Copying: 248/1024 [MB] (25 MBps) Copying: 273/1024 [MB] (24 MBps) Copying: 298/1024 [MB] (25 MBps) Copying: 324/1024 [MB] (25 MBps) Copying: 347/1024 [MB] (23 MBps) Copying: 371/1024 [MB] (23 MBps) Copying: 395/1024 [MB] (23 MBps) Copying: 419/1024 [MB] (24 MBps) Copying: 443/1024 [MB] (23 MBps) Copying: 468/1024 [MB] (24 MBps) Copying: 492/1024 [MB] (24 MBps) Copying: 517/1024 [MB] (24 MBps) Copying: 541/1024 [MB] (24 MBps) Copying: 565/1024 [MB] (23 MBps) Copying: 590/1024 [MB] (24 MBps) Copying: 615/1024 [MB] (24 MBps) Copying: 640/1024 [MB] (25 MBps) Copying: 665/1024 [MB] (25 MBps) Copying: 688/1024 [MB] (23 MBps) Copying: 713/1024 [MB] (24 MBps) Copying: 737/1024 [MB] (23 MBps) Copying: 761/1024 [MB] (24 MBps) Copying: 785/1024 [MB] (23 MBps) Copying: 808/1024 [MB] (23 MBps) Copying: 831/1024 [MB] (23 MBps) Copying: 855/1024 [MB] (23 MBps) Copying: 878/1024 [MB] (23 MBps) Copying: 901/1024 [MB] (23 MBps) Copying: 924/1024 [MB] (22 MBps) Copying: 946/1024 [MB] (22 MBps) Copying: 968/1024 [MB] (22 MBps) Copying: 991/1024 [MB] (22 MBps) Copying: 1013/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-21 01:29:21.533261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.533429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.338 [2024-07-21 01:29:21.533523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:36.338 [2024-07-21 01:29:21.533566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.338 [2024-07-21 01:29:21.533617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.338 [2024-07-21 01:29:21.535024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.535145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.338 
[2024-07-21 01:29:21.535232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:19:36.338 [2024-07-21 01:29:21.535265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.338 [2024-07-21 01:29:21.537372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.537517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.338 [2024-07-21 01:29:21.537538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:19:36.338 [2024-07-21 01:29:21.537556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.338 [2024-07-21 01:29:21.557434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.557578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.338 [2024-07-21 01:29:21.557675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.876 ms 00:19:36.338 [2024-07-21 01:29:21.557709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.338 [2024-07-21 01:29:21.562624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.562747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:36.338 [2024-07-21 01:29:21.562765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.868 ms 00:19:36.338 [2024-07-21 01:29:21.562801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.338 [2024-07-21 01:29:21.564182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.564212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.338 [2024-07-21 01:29:21.564224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:19:36.338 [2024-07-21 01:29:21.564233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.338 [2024-07-21 01:29:21.569535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.569565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.338 [2024-07-21 01:29:21.569577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.284 ms 00:19:36.338 [2024-07-21 01:29:21.569586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.338 [2024-07-21 01:29:21.569693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.338 [2024-07-21 01:29:21.569704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:36.339 [2024-07-21 01:29:21.569723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:36.339 [2024-07-21 01:29:21.569748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.339 [2024-07-21 01:29:21.572126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.339 [2024-07-21 01:29:21.572153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:36.339 [2024-07-21 01:29:21.572163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.354 ms 00:19:36.339 [2024-07-21 01:29:21.572172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.339 [2024-07-21 01:29:21.573882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.339 [2024-07-21 01:29:21.573905] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:36.339 [2024-07-21 01:29:21.573915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:19:36.339 [2024-07-21 01:29:21.573924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.339 [2024-07-21 01:29:21.575133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.339 [2024-07-21 01:29:21.575159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:36.339 [2024-07-21 01:29:21.575169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:19:36.339 [2024-07-21 01:29:21.575178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.339 [2024-07-21 01:29:21.576418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.339 [2024-07-21 01:29:21.576444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:36.339 [2024-07-21 01:29:21.576454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:19:36.339 [2024-07-21 01:29:21.576463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.339 [2024-07-21 01:29:21.576488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:36.339 [2024-07-21 01:29:21.576504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 
0 state: free 00:19:36.339 [2024-07-21 01:29:21.576684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
41: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.576992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:36.339 [2024-07-21 01:29:21.577083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577192] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577446] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:36.340 [2024-07-21 01:29:21.577553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:36.340 [2024-07-21 01:29:21.577572] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ce280a0b-3e48-42ae-87a9-3fe790281080 00:19:36.340 [2024-07-21 01:29:21.577590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:36.340 [2024-07-21 01:29:21.577599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:36.340 [2024-07-21 01:29:21.577608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:36.340 [2024-07-21 01:29:21.577618] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:36.340 [2024-07-21 01:29:21.577626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:36.340 [2024-07-21 01:29:21.577636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:36.340 [2024-07-21 01:29:21.577645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:36.340 [2024-07-21 01:29:21.577653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:36.340 [2024-07-21 01:29:21.577661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:36.340 [2024-07-21 01:29:21.577670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.340 [2024-07-21 01:29:21.577683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:36.340 [2024-07-21 01:29:21.577692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:19:36.340 [2024-07-21 01:29:21.577701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.340 [2024-07-21 01:29:21.580356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.340 [2024-07-21 01:29:21.580377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:36.340 [2024-07-21 01:29:21.580388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:19:36.340 [2024-07-21 01:29:21.580397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.340 
[2024-07-21 01:29:21.580556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.340 [2024-07-21 01:29:21.580566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:36.340 [2024-07-21 01:29:21.580576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:36.340 [2024-07-21 01:29:21.580585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.340 [2024-07-21 01:29:21.589432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.340 [2024-07-21 01:29:21.589459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.340 [2024-07-21 01:29:21.589471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.340 [2024-07-21 01:29:21.589487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.340 [2024-07-21 01:29:21.589540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.340 [2024-07-21 01:29:21.589550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.340 [2024-07-21 01:29:21.589560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.340 [2024-07-21 01:29:21.589570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.340 [2024-07-21 01:29:21.589611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.340 [2024-07-21 01:29:21.589623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.340 [2024-07-21 01:29:21.589633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.340 [2024-07-21 01:29:21.589642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.589658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.589672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.341 [2024-07-21 01:29:21.589689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.589699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.609089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.609121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.341 [2024-07-21 01:29:21.609133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.609143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.622141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.622169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.341 [2024-07-21 01:29:21.622181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.622191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.622248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.622259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.341 [2024-07-21 01:29:21.622269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 
01:29:21.622279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.622317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.622328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.341 [2024-07-21 01:29:21.622344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.622354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.622598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.622612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.341 [2024-07-21 01:29:21.622621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.622639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.622674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.622686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.341 [2024-07-21 01:29:21.622696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.622710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.622756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.622768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.341 [2024-07-21 01:29:21.622777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.622787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.622852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.341 [2024-07-21 01:29:21.622864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.341 [2024-07-21 01:29:21.622878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.341 [2024-07-21 01:29:21.622887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.341 [2024-07-21 01:29:21.623033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.880 ms, result 0 00:19:37.277 00:19:37.277 00:19:37.277 01:29:22 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:37.277 [2024-07-21 01:29:22.452154] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:37.277 [2024-07-21 01:29:22.452279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90755 ] 00:19:37.535 [2024-07-21 01:29:22.618011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.535 [2024-07-21 01:29:22.679469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.535 [2024-07-21 01:29:22.823812] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.535 [2024-07-21 01:29:22.823908] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.795 [2024-07-21 01:29:22.977751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.977803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:37.795 [2024-07-21 01:29:22.977819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.795 [2024-07-21 01:29:22.977846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.977902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.977914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.795 [2024-07-21 01:29:22.977925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:37.795 [2024-07-21 01:29:22.977938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.977964] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:37.795 [2024-07-21 01:29:22.978243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:37.795 [2024-07-21 01:29:22.978262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.978276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.795 [2024-07-21 01:29:22.978287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:37.795 [2024-07-21 01:29:22.978296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.980714] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:37.795 [2024-07-21 01:29:22.984132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.984163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:37.795 [2024-07-21 01:29:22.984180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.424 ms 00:19:37.795 [2024-07-21 01:29:22.984190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.984266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.984278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:37.795 [2024-07-21 01:29:22.984289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:37.795 [2024-07-21 01:29:22.984299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.996116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 
01:29:22.996151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.795 [2024-07-21 01:29:22.996171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.799 ms 00:19:37.795 [2024-07-21 01:29:22.996180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.996294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.996308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.795 [2024-07-21 01:29:22.996319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:37.795 [2024-07-21 01:29:22.996329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.996383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.996398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:37.795 [2024-07-21 01:29:22.996413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:37.795 [2024-07-21 01:29:22.996422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.996448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.795 [2024-07-21 01:29:22.999059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.999084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.795 [2024-07-21 01:29:22.999095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:19:37.795 [2024-07-21 01:29:22.999113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.999146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.999157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:37.795 [2024-07-21 01:29:22.999170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:37.795 [2024-07-21 01:29:22.999180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.999209] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:37.795 [2024-07-21 01:29:22.999240] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:37.795 [2024-07-21 01:29:22.999278] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:37.795 [2024-07-21 01:29:22.999305] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:37.795 [2024-07-21 01:29:22.999403] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:37.795 [2024-07-21 01:29:22.999424] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:37.795 [2024-07-21 01:29:22.999437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:37.795 [2024-07-21 01:29:22.999450] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:37.795 [2024-07-21 01:29:22.999462] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:37.795 [2024-07-21 01:29:22.999473] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:37.795 [2024-07-21 01:29:22.999483] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:37.795 [2024-07-21 01:29:22.999492] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:37.795 [2024-07-21 01:29:22.999501] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:37.795 [2024-07-21 01:29:22.999511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.999522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:37.795 [2024-07-21 01:29:22.999532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:37.795 [2024-07-21 01:29:22.999544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.999606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.795 [2024-07-21 01:29:22.999628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:37.795 [2024-07-21 01:29:22.999638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:37.795 [2024-07-21 01:29:22.999647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.795 [2024-07-21 01:29:22.999728] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:37.795 [2024-07-21 01:29:22.999740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:37.795 [2024-07-21 01:29:22.999751] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.795 [2024-07-21 01:29:22.999760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.795 [2024-07-21 01:29:22.999775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:37.795 [2024-07-21 01:29:22.999784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:37.795 [2024-07-21 01:29:22.999793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:37.795 [2024-07-21 01:29:22.999802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:37.795 [2024-07-21 01:29:22.999811] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:37.795 [2024-07-21 01:29:22.999822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.795 [2024-07-21 01:29:22.999843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:37.795 [2024-07-21 01:29:22.999852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:37.795 [2024-07-21 01:29:22.999870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.795 [2024-07-21 01:29:22.999880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:37.795 [2024-07-21 01:29:22.999890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:37.795 [2024-07-21 01:29:22.999898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.795 [2024-07-21 01:29:22.999913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:37.795 [2024-07-21 01:29:22.999922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:37.795 [2024-07-21 01:29:22.999932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:37.795 [2024-07-21 01:29:22.999941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:37.795 [2024-07-21 01:29:22.999950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:37.795 [2024-07-21 01:29:22.999959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.795 [2024-07-21 01:29:22.999968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:37.795 [2024-07-21 01:29:22.999976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:37.795 [2024-07-21 01:29:22.999986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.795 [2024-07-21 01:29:22.999995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:37.795 [2024-07-21 01:29:23.000004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:37.795 [2024-07-21 01:29:23.000013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.795 [2024-07-21 01:29:23.000022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:37.795 [2024-07-21 01:29:23.000031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:37.795 [2024-07-21 01:29:23.000039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.795 [2024-07-21 01:29:23.000048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:37.795 [2024-07-21 01:29:23.000063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:37.796 [2024-07-21 01:29:23.000072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.796 [2024-07-21 01:29:23.000080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:37.796 [2024-07-21 01:29:23.000089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:37.796 [2024-07-21 01:29:23.000097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.796 [2024-07-21 01:29:23.000105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:37.796 [2024-07-21 01:29:23.000113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:37.796 [2024-07-21 01:29:23.000122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.796 [2024-07-21 01:29:23.000130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:37.796 [2024-07-21 01:29:23.000140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:37.796 [2024-07-21 01:29:23.000149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.796 [2024-07-21 01:29:23.000158] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:37.796 [2024-07-21 01:29:23.000168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:37.796 [2024-07-21 01:29:23.000177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.796 [2024-07-21 01:29:23.000186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.796 [2024-07-21 01:29:23.000195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:37.796 [2024-07-21 01:29:23.000207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:37.796 [2024-07-21 01:29:23.000216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:37.796 [2024-07-21 01:29:23.000224] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:37.796 [2024-07-21 01:29:23.000233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:37.796 [2024-07-21 01:29:23.000242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:37.796 [2024-07-21 01:29:23.000252] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:37.796 [2024-07-21 01:29:23.000263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.796 [2024-07-21 01:29:23.000274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:37.796 [2024-07-21 01:29:23.000284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:37.796 [2024-07-21 01:29:23.000293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:37.796 [2024-07-21 01:29:23.000303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:37.796 [2024-07-21 01:29:23.000313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:37.796 [2024-07-21 01:29:23.000322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:37.796 [2024-07-21 01:29:23.000331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:37.796 [2024-07-21 01:29:23.000341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:37.796 [2024-07-21 01:29:23.000352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:37.796 [2024-07-21 01:29:23.000365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:37.796 [2024-07-21 01:29:23.000375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:37.796 [2024-07-21 01:29:23.000385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:37.796 [2024-07-21 01:29:23.000395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:37.796 [2024-07-21 01:29:23.000404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:37.796 [2024-07-21 01:29:23.000414] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:37.796 [2024-07-21 01:29:23.000431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.796 [2024-07-21 01:29:23.000448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:19:37.796 [2024-07-21 01:29:23.000458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:37.796 [2024-07-21 01:29:23.000468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:37.796 [2024-07-21 01:29:23.000478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:37.796 [2024-07-21 01:29:23.000487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.000505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:37.796 [2024-07-21 01:29:23.000519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:19:37.796 [2024-07-21 01:29:23.000528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.031311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.031393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.796 [2024-07-21 01:29:23.031434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.766 ms 00:19:37.796 [2024-07-21 01:29:23.031468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.031700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.031736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.796 [2024-07-21 01:29:23.031808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:19:37.796 [2024-07-21 01:29:23.031892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.051760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.051806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.796 [2024-07-21 01:29:23.051850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.742 ms 00:19:37.796 [2024-07-21 01:29:23.051868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.051924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.051942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.796 [2024-07-21 01:29:23.051958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:37.796 [2024-07-21 01:29:23.051980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.052808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.052857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.796 [2024-07-21 01:29:23.052875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:19:37.796 [2024-07-21 01:29:23.052891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.053074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.053106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.796 [2024-07-21 01:29:23.053123] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:37.796 [2024-07-21 01:29:23.053138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.063049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.063089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.796 [2024-07-21 01:29:23.063111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.891 ms 00:19:37.796 [2024-07-21 01:29:23.063130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.066878] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:37.796 [2024-07-21 01:29:23.066911] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:37.796 [2024-07-21 01:29:23.066929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.066940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:37.796 [2024-07-21 01:29:23.066951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.688 ms 00:19:37.796 [2024-07-21 01:29:23.066960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.079652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.079686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:37.796 [2024-07-21 01:29:23.079700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.670 ms 00:19:37.796 [2024-07-21 01:29:23.079710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.081599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.081631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:37.796 [2024-07-21 01:29:23.081643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:19:37.796 [2024-07-21 01:29:23.081652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.083196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.083225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:37.796 [2024-07-21 01:29:23.083236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:19:37.796 [2024-07-21 01:29:23.083244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.796 [2024-07-21 01:29:23.083502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.796 [2024-07-21 01:29:23.083526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.796 [2024-07-21 01:29:23.083537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:37.796 [2024-07-21 01:29:23.083551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.112947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.113002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.055 [2024-07-21 01:29:23.113019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.415 ms 00:19:38.055 
[2024-07-21 01:29:23.113030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.119040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:38.055 [2024-07-21 01:29:23.122118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.122147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.055 [2024-07-21 01:29:23.122160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.060 ms 00:19:38.055 [2024-07-21 01:29:23.122170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.122232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.122245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.055 [2024-07-21 01:29:23.122256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:38.055 [2024-07-21 01:29:23.122269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.122332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.122351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.055 [2024-07-21 01:29:23.122362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:38.055 [2024-07-21 01:29:23.122371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.122393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.122403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.055 [2024-07-21 01:29:23.122413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.055 [2024-07-21 01:29:23.122423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.122457] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:38.055 [2024-07-21 01:29:23.122470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.122480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:38.055 [2024-07-21 01:29:23.122494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:38.055 [2024-07-21 01:29:23.122503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.126966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.126999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.055 [2024-07-21 01:29:23.127021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.450 ms 00:19:38.055 [2024-07-21 01:29:23.127030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.127104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.055 [2024-07-21 01:29:23.127116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.055 [2024-07-21 01:29:23.127127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:38.055 [2024-07-21 01:29:23.127148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.055 [2024-07-21 01:29:23.128548] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.572 ms, result 0 00:20:20.404  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 115/1024 [MB] (22 MBps) Copying: 139/1024 [MB] (23 MBps) Copying: 163/1024 [MB] (24 MBps) Copying: 188/1024 [MB] (24 MBps) Copying: 213/1024 [MB] (24 MBps) Copying: 237/1024 [MB] (24 MBps) Copying: 262/1024 [MB] (24 MBps) Copying: 285/1024 [MB] (23 MBps) Copying: 309/1024 [MB] (24 MBps) Copying: 333/1024 [MB] (23 MBps) Copying: 357/1024 [MB] (24 MBps) Copying: 380/1024 [MB] (23 MBps) Copying: 404/1024 [MB] (23 MBps) Copying: 427/1024 [MB] (23 MBps) Copying: 451/1024 [MB] (23 MBps) Copying: 476/1024 [MB] (25 MBps) Copying: 501/1024 [MB] (24 MBps) Copying: 525/1024 [MB] (24 MBps) Copying: 551/1024 [MB] (25 MBps) Copying: 575/1024 [MB] (24 MBps) Copying: 601/1024 [MB] (25 MBps) Copying: 626/1024 [MB] (25 MBps) Copying: 651/1024 [MB] (24 MBps) Copying: 675/1024 [MB] (24 MBps) Copying: 700/1024 [MB] (24 MBps) Copying: 725/1024 [MB] (25 MBps) Copying: 750/1024 [MB] (25 MBps) Copying: 775/1024 [MB] (24 MBps) Copying: 799/1024 [MB] (24 MBps) Copying: 823/1024 [MB] (24 MBps) Copying: 848/1024 [MB] (25 MBps) Copying: 874/1024 [MB] (26 MBps) Copying: 900/1024 [MB] (25 MBps) Copying: 925/1024 [MB] (25 MBps) Copying: 951/1024 [MB] (25 MBps) Copying: 978/1024 [MB] (26 MBps) Copying: 1004/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-21 01:30:05.712996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.404 [2024-07-21 01:30:05.713079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.404 [2024-07-21 01:30:05.713103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:20.404 [2024-07-21 01:30:05.713116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.404 [2024-07-21 01:30:05.713145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.404 [2024-07-21 01:30:05.713918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.404 [2024-07-21 01:30:05.713941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.404 [2024-07-21 01:30:05.713953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:20:20.404 [2024-07-21 01:30:05.713982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.404 [2024-07-21 01:30:05.714181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.404 [2024-07-21 01:30:05.714194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.404 [2024-07-21 01:30:05.714210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:20:20.404 [2024-07-21 01:30:05.714220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.716908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.716933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.664 [2024-07-21 01:30:05.716945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:20:20.664 [2024-07-21 01:30:05.716955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.722359] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.722397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.664 [2024-07-21 01:30:05.722409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.384 ms 00:20:20.664 [2024-07-21 01:30:05.722425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.724524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.724562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.664 [2024-07-21 01:30:05.724575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:20:20.664 [2024-07-21 01:30:05.724585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.730144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.730187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.664 [2024-07-21 01:30:05.730200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.534 ms 00:20:20.664 [2024-07-21 01:30:05.730210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.730314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.730328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.664 [2024-07-21 01:30:05.730338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:20.664 [2024-07-21 01:30:05.730360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.733508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.733544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:20.664 [2024-07-21 01:30:05.733556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:20:20.664 [2024-07-21 01:30:05.733566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.735865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.735898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:20.664 [2024-07-21 01:30:05.735910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:20:20.664 [2024-07-21 01:30:05.735919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.737391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.737429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.664 [2024-07-21 01:30:05.737440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:20:20.664 [2024-07-21 01:30:05.737463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.664 [2024-07-21 01:30:05.738731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.664 [2024-07-21 01:30:05.738768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.664 [2024-07-21 01:30:05.738780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:20:20.664 [2024-07-21 01:30:05.738789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:20.664 [2024-07-21 01:30:05.738815] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.664 [2024-07-21 01:30:05.738851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.664 [2024-07-21 01:30:05.738940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.738950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.738960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.738971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.738982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.738992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 
[2024-07-21 01:30:05.739107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:20:20.665 [2024-07-21 01:30:05.739369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.665 [2024-07-21 01:30:05.739999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.665 [2024-07-21 01:30:05.740009] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ce280a0b-3e48-42ae-87a9-3fe790281080 00:20:20.665 [2024-07-21 01:30:05.740031] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.665 [2024-07-21 01:30:05.740042] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:20.665 [2024-07-21 01:30:05.740052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.665 [2024-07-21 01:30:05.740063] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.666 [2024-07-21 01:30:05.740073] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.666 [2024-07-21 01:30:05.740084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.666 [2024-07-21 01:30:05.740098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.666 [2024-07-21 01:30:05.740108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.666 [2024-07-21 01:30:05.740117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.666 [2024-07-21 01:30:05.740128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.666 [2024-07-21 01:30:05.740139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.666 [2024-07-21 01:30:05.740150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:20:20.666 [2024-07-21 01:30:05.740161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.741955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.666 [2024-07-21 01:30:05.741978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.666 [2024-07-21 01:30:05.741989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:20:20.666 [2024-07-21 01:30:05.741999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.742114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.666 [2024-07-21 01:30:05.742127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.666 [2024-07-21 01:30:05.742137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:20.666 [2024-07-21 01:30:05.742156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.749229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.749258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.666 [2024-07-21 01:30:05.749276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.749290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.749350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.749362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.666 [2024-07-21 01:30:05.749373] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.749382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.749438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.749450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.666 [2024-07-21 01:30:05.749460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.749471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.749491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.749502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.666 [2024-07-21 01:30:05.749512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.749521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.762275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.762325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.666 [2024-07-21 01:30:05.762339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.762350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.770604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.770647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.666 [2024-07-21 01:30:05.770667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.770678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.770726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.770737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.666 [2024-07-21 01:30:05.770748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.770758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.770784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.770798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.666 [2024-07-21 01:30:05.770808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.770818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.770905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.770924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.666 [2024-07-21 01:30:05.770935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.770945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.770978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.770989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:20:20.666 [2024-07-21 01:30:05.771003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.771013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.771052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.771063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.666 [2024-07-21 01:30:05.771074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.771084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.771126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.666 [2024-07-21 01:30:05.771140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.666 [2024-07-21 01:30:05.771150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.666 [2024-07-21 01:30:05.771166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.666 [2024-07-21 01:30:05.771296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.364 ms, result 0 00:20:20.925 00:20:20.925 00:20:20.925 01:30:06 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:22.842 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:22.842 01:30:07 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:22.842 [2024-07-21 01:30:07.849474] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:20:22.842 [2024-07-21 01:30:07.849593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91218 ] 00:20:22.842 [2024-07-21 01:30:08.016695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.842 [2024-07-21 01:30:08.058218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.101 [2024-07-21 01:30:08.160070] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.101 [2024-07-21 01:30:08.160145] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.101 [2024-07-21 01:30:08.310861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.310911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:23.101 [2024-07-21 01:30:08.310926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:23.101 [2024-07-21 01:30:08.310935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.311011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.311025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.101 [2024-07-21 01:30:08.311036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:23.101 [2024-07-21 01:30:08.311049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.311077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:23.101 [2024-07-21 01:30:08.311344] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:23.101 [2024-07-21 01:30:08.311363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.311376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.101 [2024-07-21 01:30:08.311387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:23.101 [2024-07-21 01:30:08.311396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.312943] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:23.101 [2024-07-21 01:30:08.315393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.315426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:23.101 [2024-07-21 01:30:08.315467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:20:23.101 [2024-07-21 01:30:08.315477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.315534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.315552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:23.101 [2024-07-21 01:30:08.315564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:23.101 [2024-07-21 01:30:08.315573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.322318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 
01:30:08.322349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.101 [2024-07-21 01:30:08.322361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.697 ms 00:20:23.101 [2024-07-21 01:30:08.322387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.322473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.322486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.101 [2024-07-21 01:30:08.322504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:23.101 [2024-07-21 01:30:08.322520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.322577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.322593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:23.101 [2024-07-21 01:30:08.322605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:23.101 [2024-07-21 01:30:08.322615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.322641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.101 [2024-07-21 01:30:08.324265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.324292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.101 [2024-07-21 01:30:08.324303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:20:23.101 [2024-07-21 01:30:08.324322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.324354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.101 [2024-07-21 01:30:08.324366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:23.101 [2024-07-21 01:30:08.324379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:23.101 [2024-07-21 01:30:08.324388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.101 [2024-07-21 01:30:08.324411] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:23.101 [2024-07-21 01:30:08.324442] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:23.101 [2024-07-21 01:30:08.324483] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:23.101 [2024-07-21 01:30:08.324506] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:23.101 [2024-07-21 01:30:08.324590] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:23.102 [2024-07-21 01:30:08.324606] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:23.102 [2024-07-21 01:30:08.324622] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:23.102 [2024-07-21 01:30:08.324635] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:23.102 [2024-07-21 01:30:08.324647] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:23.102 [2024-07-21 01:30:08.324665] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:23.102 [2024-07-21 01:30:08.324675] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:23.102 [2024-07-21 01:30:08.324685] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:23.102 [2024-07-21 01:30:08.324695] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:23.102 [2024-07-21 01:30:08.324705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.102 [2024-07-21 01:30:08.324715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:23.102 [2024-07-21 01:30:08.324725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:20:23.102 [2024-07-21 01:30:08.324737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.102 [2024-07-21 01:30:08.324805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.102 [2024-07-21 01:30:08.324816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:23.102 [2024-07-21 01:30:08.324836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:23.102 [2024-07-21 01:30:08.324846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.102 [2024-07-21 01:30:08.324938] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:23.102 [2024-07-21 01:30:08.324950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:23.102 [2024-07-21 01:30:08.324967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.102 [2024-07-21 01:30:08.324977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.102 [2024-07-21 01:30:08.324992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:23.102 [2024-07-21 01:30:08.325001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:23.102 [2024-07-21 01:30:08.325029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.102 [2024-07-21 01:30:08.325047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:23.102 [2024-07-21 01:30:08.325056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:23.102 [2024-07-21 01:30:08.325075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.102 [2024-07-21 01:30:08.325084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:23.102 [2024-07-21 01:30:08.325094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:23.102 [2024-07-21 01:30:08.325103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:23.102 [2024-07-21 01:30:08.325124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:23.102 [2024-07-21 01:30:08.325152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:23.102 [2024-07-21 01:30:08.325179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:23.102 [2024-07-21 01:30:08.325206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:23.102 [2024-07-21 01:30:08.325233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325242] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:23.102 [2024-07-21 01:30:08.325265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.102 [2024-07-21 01:30:08.325283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:23.102 [2024-07-21 01:30:08.325292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:23.102 [2024-07-21 01:30:08.325301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.102 [2024-07-21 01:30:08.325310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:23.102 [2024-07-21 01:30:08.325319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:23.102 [2024-07-21 01:30:08.325328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:23.102 [2024-07-21 01:30:08.325346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:23.102 [2024-07-21 01:30:08.325355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325363] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:23.102 [2024-07-21 01:30:08.325375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:23.102 [2024-07-21 01:30:08.325384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.102 [2024-07-21 01:30:08.325403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:23.102 [2024-07-21 01:30:08.325415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:23.102 [2024-07-21 01:30:08.325425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:23.102 [2024-07-21 01:30:08.325434] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:23.102 [2024-07-21 01:30:08.325444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:23.102 [2024-07-21 01:30:08.325453] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:23.102 [2024-07-21 01:30:08.325463] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:23.102 [2024-07-21 01:30:08.325475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.102 [2024-07-21 01:30:08.325486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:23.102 [2024-07-21 01:30:08.325496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:23.102 [2024-07-21 01:30:08.325507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:23.102 [2024-07-21 01:30:08.325517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:23.102 [2024-07-21 01:30:08.325527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:23.102 [2024-07-21 01:30:08.325537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:23.102 [2024-07-21 01:30:08.325547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:23.102 [2024-07-21 01:30:08.325557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:23.102 [2024-07-21 01:30:08.325567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:23.102 [2024-07-21 01:30:08.325579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:23.102 [2024-07-21 01:30:08.325590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:23.102 [2024-07-21 01:30:08.325600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:23.102 [2024-07-21 01:30:08.325610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:23.102 [2024-07-21 01:30:08.325620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:23.102 [2024-07-21 01:30:08.325630] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:23.102 [2024-07-21 01:30:08.325648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.102 [2024-07-21 01:30:08.325660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:20:23.102 [2024-07-21 01:30:08.325670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:23.102 [2024-07-21 01:30:08.325680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:23.102 [2024-07-21 01:30:08.325690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:23.102 [2024-07-21 01:30:08.325700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.102 [2024-07-21 01:30:08.325711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:23.102 [2024-07-21 01:30:08.325724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:20:23.102 [2024-07-21 01:30:08.325733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.102 [2024-07-21 01:30:08.347376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.102 [2024-07-21 01:30:08.347428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.102 [2024-07-21 01:30:08.347454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.622 ms 00:20:23.102 [2024-07-21 01:30:08.347468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.102 [2024-07-21 01:30:08.347576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.102 [2024-07-21 01:30:08.347591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:23.102 [2024-07-21 01:30:08.347604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:23.102 [2024-07-21 01:30:08.347633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.102 [2024-07-21 01:30:08.358740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.102 [2024-07-21 01:30:08.358783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.103 [2024-07-21 01:30:08.358813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.061 ms 00:20:23.103 [2024-07-21 01:30:08.358824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.358880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.358893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.103 [2024-07-21 01:30:08.358907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:23.103 [2024-07-21 01:30:08.358917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.359369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.359390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.103 [2024-07-21 01:30:08.359401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:20:23.103 [2024-07-21 01:30:08.359411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.359527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.359543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.103 [2024-07-21 01:30:08.359553] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:23.103 [2024-07-21 01:30:08.359566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.365478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.365520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.103 [2024-07-21 01:30:08.365540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.900 ms 00:20:23.103 [2024-07-21 01:30:08.365550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.368167] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:23.103 [2024-07-21 01:30:08.368204] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:23.103 [2024-07-21 01:30:08.368218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.368229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:23.103 [2024-07-21 01:30:08.368239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:20:23.103 [2024-07-21 01:30:08.368249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.380888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.380929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:23.103 [2024-07-21 01:30:08.380942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.619 ms 00:20:23.103 [2024-07-21 01:30:08.380952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.382578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.382613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:23.103 [2024-07-21 01:30:08.382625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:20:23.103 [2024-07-21 01:30:08.382634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.384012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.384043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:23.103 [2024-07-21 01:30:08.384055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:20:23.103 [2024-07-21 01:30:08.384065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.103 [2024-07-21 01:30:08.384332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.103 [2024-07-21 01:30:08.384355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:23.103 [2024-07-21 01:30:08.384367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:23.103 [2024-07-21 01:30:08.384381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.415545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.415610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:23.361 [2024-07-21 01:30:08.415627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.194 ms 00:20:23.361 
[2024-07-21 01:30:08.415653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.421801] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:23.361 [2024-07-21 01:30:08.424410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.424441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:23.361 [2024-07-21 01:30:08.424477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.726 ms 00:20:23.361 [2024-07-21 01:30:08.424487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.424555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.424567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:23.361 [2024-07-21 01:30:08.424578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:23.361 [2024-07-21 01:30:08.424600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.424675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.424693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:23.361 [2024-07-21 01:30:08.424704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:23.361 [2024-07-21 01:30:08.424714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.424735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.424756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:23.361 [2024-07-21 01:30:08.424774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:23.361 [2024-07-21 01:30:08.424784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.424818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:23.361 [2024-07-21 01:30:08.424864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.424884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:23.361 [2024-07-21 01:30:08.424895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:23.361 [2024-07-21 01:30:08.424905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.428245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.428280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:23.361 [2024-07-21 01:30:08.428293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.327 ms 00:20:23.361 [2024-07-21 01:30:08.428303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.428364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.361 [2024-07-21 01:30:08.428376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:23.361 [2024-07-21 01:30:08.428393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:23.361 [2024-07-21 01:30:08.428402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.361 [2024-07-21 01:30:08.429474] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.426 ms, result 0 00:21:07.841  Copying: 25/1024 [MB] (25 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 73/1024 [MB] (24 MBps) Copying: 97/1024 [MB] (23 MBps) Copying: 121/1024 [MB] (23 MBps) Copying: 145/1024 [MB] (23 MBps) Copying: 168/1024 [MB] (23 MBps) Copying: 191/1024 [MB] (23 MBps) Copying: 214/1024 [MB] (22 MBps) Copying: 236/1024 [MB] (22 MBps) Copying: 259/1024 [MB] (22 MBps) Copying: 282/1024 [MB] (22 MBps) Copying: 304/1024 [MB] (22 MBps) Copying: 327/1024 [MB] (22 MBps) Copying: 350/1024 [MB] (23 MBps) Copying: 372/1024 [MB] (22 MBps) Copying: 396/1024 [MB] (23 MBps) Copying: 419/1024 [MB] (23 MBps) Copying: 443/1024 [MB] (23 MBps) Copying: 466/1024 [MB] (23 MBps) Copying: 490/1024 [MB] (23 MBps) Copying: 513/1024 [MB] (23 MBps) Copying: 536/1024 [MB] (22 MBps) Copying: 559/1024 [MB] (23 MBps) Copying: 582/1024 [MB] (23 MBps) Copying: 606/1024 [MB] (23 MBps) Copying: 630/1024 [MB] (23 MBps) Copying: 653/1024 [MB] (23 MBps) Copying: 677/1024 [MB] (23 MBps) Copying: 699/1024 [MB] (22 MBps) Copying: 722/1024 [MB] (22 MBps) Copying: 745/1024 [MB] (22 MBps) Copying: 768/1024 [MB] (23 MBps) Copying: 791/1024 [MB] (23 MBps) Copying: 813/1024 [MB] (22 MBps) Copying: 836/1024 [MB] (23 MBps) Copying: 860/1024 [MB] (23 MBps) Copying: 882/1024 [MB] (22 MBps) Copying: 905/1024 [MB] (22 MBps) Copying: 928/1024 [MB] (22 MBps) Copying: 951/1024 [MB] (23 MBps) Copying: 976/1024 [MB] (24 MBps) Copying: 999/1024 [MB] (23 MBps) Copying: 1021/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-21 01:30:52.915643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:52.915695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:07.841 [2024-07-21 01:30:52.915721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.841 [2024-07-21 01:30:52.915732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:52.915772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:07.841 [2024-07-21 01:30:52.916567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:52.916601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:07.841 [2024-07-21 01:30:52.916629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:21:07.841 [2024-07-21 01:30:52.916640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:52.927113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:52.927156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:07.841 [2024-07-21 01:30:52.927169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.040 ms 00:21:07.841 [2024-07-21 01:30:52.927180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:52.948971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:52.949011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:07.841 [2024-07-21 01:30:52.949031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.809 ms 00:21:07.841 [2024-07-21 01:30:52.949042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:52.954074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:52.954106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:07.841 [2024-07-21 01:30:52.954118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms 00:21:07.841 [2024-07-21 01:30:52.954140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:52.955934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:52.955967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:07.841 [2024-07-21 01:30:52.955979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:21:07.841 [2024-07-21 01:30:52.955989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:52.959662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:52.959702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:07.841 [2024-07-21 01:30:52.959720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.651 ms 00:21:07.841 [2024-07-21 01:30:52.959730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:53.033995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:53.034040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:07.841 [2024-07-21 01:30:53.034054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.351 ms 00:21:07.841 [2024-07-21 01:30:53.034064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:53.036444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:53.036479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:07.841 [2024-07-21 01:30:53.036491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:21:07.841 [2024-07-21 01:30:53.036500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:53.037905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:53.037939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:07.841 [2024-07-21 01:30:53.037951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:21:07.841 [2024-07-21 01:30:53.037960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:53.039093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:53.039126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:07.841 [2024-07-21 01:30:53.039137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:21:07.841 [2024-07-21 01:30:53.039160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:53.040252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.841 [2024-07-21 01:30:53.040285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:07.841 [2024-07-21 01:30:53.040296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:21:07.841 
[2024-07-21 01:30:53.040305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.841 [2024-07-21 01:30:53.040331] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:07.841 [2024-07-21 01:30:53.040348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 66304 / 261120 wr_cnt: 1 state: open 00:21:07.841 [2024-07-21 01:30:53.040361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:07.841 [2024-07-21 01:30:53.040510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040592] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 
01:30:53.040872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.040996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:21:07.842 [2024-07-21 01:30:53.041132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:07.842 [2024-07-21 01:30:53.041163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:07.843 [2024-07-21 01:30:53.041418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:07.843 [2024-07-21 01:30:53.041429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ce280a0b-3e48-42ae-87a9-3fe790281080 00:21:07.843 [2024-07-21 01:30:53.041445] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 66304 00:21:07.843 [2024-07-21 01:30:53.041454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 67264 00:21:07.843 [2024-07-21 01:30:53.041472] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 66304 00:21:07.843 [2024-07-21 01:30:53.041483] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0145 00:21:07.843 [2024-07-21 01:30:53.041499] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:07.843 [2024-07-21 01:30:53.041509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:07.843 [2024-07-21 01:30:53.041519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:07.843 [2024-07-21 01:30:53.041528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:07.843 [2024-07-21 01:30:53.041537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:07.843 [2024-07-21 01:30:53.041546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.843 [2024-07-21 01:30:53.041556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:07.843 [2024-07-21 01:30:53.041566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:21:07.843 [2024-07-21 01:30:53.041576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.043268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.843 [2024-07-21 01:30:53.043290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:07.843 [2024-07-21 01:30:53.043302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:21:07.843 [2024-07-21 01:30:53.043312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.043426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.843 [2024-07-21 01:30:53.043443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:07.843 [2024-07-21 01:30:53.043457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:07.843 [2024-07-21 01:30:53.043474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.049430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.843 [2024-07-21 01:30:53.049458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.843 [2024-07-21 01:30:53.049470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.843 [2024-07-21 01:30:53.049480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.049529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.843 [2024-07-21 01:30:53.049543] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.843 [2024-07-21 01:30:53.049557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.843 [2024-07-21 01:30:53.049567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.049625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.843 [2024-07-21 01:30:53.049638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.843 [2024-07-21 01:30:53.049648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.843 [2024-07-21 01:30:53.049665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.049687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.843 [2024-07-21 01:30:53.049698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.843 [2024-07-21 01:30:53.049707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.843 [2024-07-21 01:30:53.049726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.061848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.843 [2024-07-21 01:30:53.061894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.843 [2024-07-21 01:30:53.061907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.843 [2024-07-21 01:30:53.061918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.843 [2024-07-21 01:30:53.070177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.843 [2024-07-21 01:30:53.070216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.843 [2024-07-21 01:30:53.070228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.844 [2024-07-21 01:30:53.070245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.844 [2024-07-21 01:30:53.070292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.844 [2024-07-21 01:30:53.070303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.844 [2024-07-21 01:30:53.070313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.844 [2024-07-21 01:30:53.070323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.844 [2024-07-21 01:30:53.070347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.844 [2024-07-21 01:30:53.070358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.844 [2024-07-21 01:30:53.070368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.844 [2024-07-21 01:30:53.070377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.844 [2024-07-21 01:30:53.070451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.844 [2024-07-21 01:30:53.070464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.844 [2024-07-21 01:30:53.070475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.844 [2024-07-21 01:30:53.070492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.844 [2024-07-21 01:30:53.070529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:07.844 [2024-07-21 01:30:53.070541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:07.844 [2024-07-21 01:30:53.070551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.844 [2024-07-21 01:30:53.070561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.844 [2024-07-21 01:30:53.070608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.844 [2024-07-21 01:30:53.070625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.844 [2024-07-21 01:30:53.070635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.844 [2024-07-21 01:30:53.070644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.844 [2024-07-21 01:30:53.070691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:07.844 [2024-07-21 01:30:53.070704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.844 [2024-07-21 01:30:53.070714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:07.844 [2024-07-21 01:30:53.070724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.844 [2024-07-21 01:30:53.070863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 155.423 ms, result 0 00:21:08.790 00:21:08.790 00:21:08.790 01:30:53 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:08.790 [2024-07-21 01:30:53.874761] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:21:08.790 [2024-07-21 01:30:53.874897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91691 ] 00:21:08.790 [2024-07-21 01:30:54.038085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.790 [2024-07-21 01:30:54.079822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.050 [2024-07-21 01:30:54.180601] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.050 [2024-07-21 01:30:54.180684] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:09.050 [2024-07-21 01:30:54.331125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.331186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:09.050 [2024-07-21 01:30:54.331203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.050 [2024-07-21 01:30:54.331213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.331270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.331283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.050 [2024-07-21 01:30:54.331293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:09.050 [2024-07-21 01:30:54.331305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.331332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:09.050 [2024-07-21 01:30:54.331611] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:09.050 [2024-07-21 01:30:54.331631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.331644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.050 [2024-07-21 01:30:54.331654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:21:09.050 [2024-07-21 01:30:54.331664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.333122] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:09.050 [2024-07-21 01:30:54.335605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.335645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:09.050 [2024-07-21 01:30:54.335677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:21:09.050 [2024-07-21 01:30:54.335694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.335761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.335774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:09.050 [2024-07-21 01:30:54.335784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:09.050 [2024-07-21 01:30:54.335811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.342488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 
01:30:54.342520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.050 [2024-07-21 01:30:54.342531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.620 ms 00:21:09.050 [2024-07-21 01:30:54.342540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.342638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.342650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.050 [2024-07-21 01:30:54.342667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:09.050 [2024-07-21 01:30:54.342676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.342731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.342747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:09.050 [2024-07-21 01:30:54.342760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:09.050 [2024-07-21 01:30:54.342776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.342802] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:09.050 [2024-07-21 01:30:54.344406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.344435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.050 [2024-07-21 01:30:54.344446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:21:09.050 [2024-07-21 01:30:54.344456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.344494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.344505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:09.050 [2024-07-21 01:30:54.344518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:09.050 [2024-07-21 01:30:54.344527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.344548] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:09.050 [2024-07-21 01:30:54.344571] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:09.050 [2024-07-21 01:30:54.344603] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:09.050 [2024-07-21 01:30:54.344620] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:09.050 [2024-07-21 01:30:54.344710] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:09.050 [2024-07-21 01:30:54.344726] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:09.050 [2024-07-21 01:30:54.344749] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:09.050 [2024-07-21 01:30:54.344763] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:09.050 [2024-07-21 01:30:54.344775] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:09.050 [2024-07-21 01:30:54.344785] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:09.050 [2024-07-21 01:30:54.344795] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:09.050 [2024-07-21 01:30:54.344811] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:09.050 [2024-07-21 01:30:54.344843] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:09.050 [2024-07-21 01:30:54.344853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.344863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:09.050 [2024-07-21 01:30:54.344874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:09.050 [2024-07-21 01:30:54.344887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.344954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.050 [2024-07-21 01:30:54.344965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:09.050 [2024-07-21 01:30:54.344975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:09.050 [2024-07-21 01:30:54.344984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.050 [2024-07-21 01:30:54.345065] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:09.050 [2024-07-21 01:30:54.345078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:09.050 [2024-07-21 01:30:54.345088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.050 [2024-07-21 01:30:54.345098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.050 [2024-07-21 01:30:54.345111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:09.050 [2024-07-21 01:30:54.345121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:09.050 [2024-07-21 01:30:54.345130] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:09.050 [2024-07-21 01:30:54.345139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:09.050 [2024-07-21 01:30:54.345148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.051 [2024-07-21 01:30:54.345167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:09.051 [2024-07-21 01:30:54.345176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:09.051 [2024-07-21 01:30:54.345195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:09.051 [2024-07-21 01:30:54.345204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:09.051 [2024-07-21 01:30:54.345214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:09.051 [2024-07-21 01:30:54.345223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:09.051 [2024-07-21 01:30:54.345246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:09.051 [2024-07-21 01:30:54.345255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:09.051 [2024-07-21 01:30:54.345273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.051 [2024-07-21 01:30:54.345291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:09.051 [2024-07-21 01:30:54.345300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.051 [2024-07-21 01:30:54.345317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:09.051 [2024-07-21 01:30:54.345326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.051 [2024-07-21 01:30:54.345343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:09.051 [2024-07-21 01:30:54.345352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:09.051 [2024-07-21 01:30:54.345370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:09.051 [2024-07-21 01:30:54.345381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.051 [2024-07-21 01:30:54.345399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:09.051 [2024-07-21 01:30:54.345408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:09.051 [2024-07-21 01:30:54.345417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:09.051 [2024-07-21 01:30:54.345426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:09.051 [2024-07-21 01:30:54.345435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:09.051 [2024-07-21 01:30:54.345444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:09.051 [2024-07-21 01:30:54.345461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:09.051 [2024-07-21 01:30:54.345470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345478] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:09.051 [2024-07-21 01:30:54.345488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:09.051 [2024-07-21 01:30:54.345497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:09.051 [2024-07-21 01:30:54.345506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:09.051 [2024-07-21 01:30:54.345516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:09.051 [2024-07-21 01:30:54.345528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:09.051 [2024-07-21 01:30:54.345537] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:09.051 [2024-07-21 01:30:54.345545] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:09.051 [2024-07-21 01:30:54.345554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:09.051 [2024-07-21 01:30:54.345564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:09.051 [2024-07-21 01:30:54.345574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:09.051 [2024-07-21 01:30:54.345586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.051 [2024-07-21 01:30:54.345597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:09.051 [2024-07-21 01:30:54.345607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:09.051 [2024-07-21 01:30:54.345617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:09.051 [2024-07-21 01:30:54.345627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:09.051 [2024-07-21 01:30:54.345637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:09.051 [2024-07-21 01:30:54.345646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:09.051 [2024-07-21 01:30:54.345656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:09.051 [2024-07-21 01:30:54.345667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:09.051 [2024-07-21 01:30:54.345677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:09.051 [2024-07-21 01:30:54.345692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:09.051 [2024-07-21 01:30:54.345702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:09.051 [2024-07-21 01:30:54.345712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:09.051 [2024-07-21 01:30:54.345722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:09.051 [2024-07-21 01:30:54.345733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:09.051 [2024-07-21 01:30:54.345742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:09.051 [2024-07-21 01:30:54.345753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:09.051 [2024-07-21 01:30:54.345764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:21:09.051 [2024-07-21 01:30:54.345774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:09.051 [2024-07-21 01:30:54.345784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:09.051 [2024-07-21 01:30:54.345794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:09.051 [2024-07-21 01:30:54.345805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.051 [2024-07-21 01:30:54.345821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:09.051 [2024-07-21 01:30:54.345855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:21:09.051 [2024-07-21 01:30:54.345865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.310 [2024-07-21 01:30:54.368293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.310 [2024-07-21 01:30:54.368336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.310 [2024-07-21 01:30:54.368363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.411 ms 00:21:09.310 [2024-07-21 01:30:54.368376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.310 [2024-07-21 01:30:54.368468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.310 [2024-07-21 01:30:54.368482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:09.310 [2024-07-21 01:30:54.368496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:09.310 [2024-07-21 01:30:54.368511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.310 [2024-07-21 01:30:54.378927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.310 [2024-07-21 01:30:54.378963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.310 [2024-07-21 01:30:54.378998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.367 ms 00:21:09.310 [2024-07-21 01:30:54.379008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.310 [2024-07-21 01:30:54.379046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.310 [2024-07-21 01:30:54.379057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.310 [2024-07-21 01:30:54.379068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:09.310 [2024-07-21 01:30:54.379088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.310 [2024-07-21 01:30:54.379531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.310 [2024-07-21 01:30:54.379551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.310 [2024-07-21 01:30:54.379562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:21:09.310 [2024-07-21 01:30:54.379572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.310 [2024-07-21 01:30:54.379679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.310 [2024-07-21 01:30:54.379695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.310 [2024-07-21 01:30:54.379706] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:09.310 [2024-07-21 01:30:54.379716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.310 [2024-07-21 01:30:54.385532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.385565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.311 [2024-07-21 01:30:54.385593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.802 ms 00:21:09.311 [2024-07-21 01:30:54.385602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.388229] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:09.311 [2024-07-21 01:30:54.388261] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:09.311 [2024-07-21 01:30:54.388279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.388289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:09.311 [2024-07-21 01:30:54.388299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:21:09.311 [2024-07-21 01:30:54.388324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.400518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.400557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:09.311 [2024-07-21 01:30:54.400569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.174 ms 00:21:09.311 [2024-07-21 01:30:54.400578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.402314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.402344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:09.311 [2024-07-21 01:30:54.402356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:21:09.311 [2024-07-21 01:30:54.402366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.403791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.403822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:09.311 [2024-07-21 01:30:54.403844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:21:09.311 [2024-07-21 01:30:54.403853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.404126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.404143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:09.311 [2024-07-21 01:30:54.404154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:21:09.311 [2024-07-21 01:30:54.404167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.426203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.426261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:09.311 [2024-07-21 01:30:54.426300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.048 ms 00:21:09.311 
[2024-07-21 01:30:54.426311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.432245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:09.311 [2024-07-21 01:30:54.434636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.434666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:09.311 [2024-07-21 01:30:54.434678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.297 ms 00:21:09.311 [2024-07-21 01:30:54.434695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.434775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.434787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:09.311 [2024-07-21 01:30:54.434809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:09.311 [2024-07-21 01:30:54.434824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.436150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.436208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:09.311 [2024-07-21 01:30:54.436221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:21:09.311 [2024-07-21 01:30:54.436232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.436258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.436278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:09.311 [2024-07-21 01:30:54.436288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:09.311 [2024-07-21 01:30:54.436297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.436336] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:09.311 [2024-07-21 01:30:54.436348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.436361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:09.311 [2024-07-21 01:30:54.436374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:09.311 [2024-07-21 01:30:54.436383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.439961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.439993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:09.311 [2024-07-21 01:30:54.440005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.549 ms 00:21:09.311 [2024-07-21 01:30:54.440016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.440078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.311 [2024-07-21 01:30:54.440090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:09.311 [2024-07-21 01:30:54.440101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:09.311 [2024-07-21 01:30:54.440126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.311 [2024-07-21 01:30:54.442532] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.856 ms, result 0 00:21:51.550  Copying: 16/1024 [MB] (16 MBps) Copying: 41/1024 [MB] (25 MBps) Copying: 67/1024 [MB] (26 MBps) Copying: 93/1024 [MB] (26 MBps) Copying: 119/1024 [MB] (25 MBps) Copying: 145/1024 [MB] (26 MBps) Copying: 172/1024 [MB] (26 MBps) Copying: 199/1024 [MB] (26 MBps) Copying: 226/1024 [MB] (26 MBps) Copying: 252/1024 [MB] (26 MBps) Copying: 277/1024 [MB] (24 MBps) Copying: 301/1024 [MB] (24 MBps) Copying: 325/1024 [MB] (24 MBps) Copying: 350/1024 [MB] (24 MBps) Copying: 375/1024 [MB] (25 MBps) Copying: 400/1024 [MB] (24 MBps) Copying: 424/1024 [MB] (24 MBps) Copying: 447/1024 [MB] (23 MBps) Copying: 471/1024 [MB] (23 MBps) Copying: 494/1024 [MB] (23 MBps) Copying: 518/1024 [MB] (24 MBps) Copying: 543/1024 [MB] (24 MBps) Copying: 566/1024 [MB] (23 MBps) Copying: 591/1024 [MB] (24 MBps) Copying: 615/1024 [MB] (24 MBps) Copying: 640/1024 [MB] (24 MBps) Copying: 664/1024 [MB] (24 MBps) Copying: 688/1024 [MB] (24 MBps) Copying: 712/1024 [MB] (23 MBps) Copying: 736/1024 [MB] (23 MBps) Copying: 760/1024 [MB] (23 MBps) Copying: 783/1024 [MB] (23 MBps) Copying: 808/1024 [MB] (24 MBps) Copying: 832/1024 [MB] (24 MBps) Copying: 857/1024 [MB] (24 MBps) Copying: 881/1024 [MB] (23 MBps) Copying: 905/1024 [MB] (24 MBps) Copying: 929/1024 [MB] (24 MBps) Copying: 953/1024 [MB] (23 MBps) Copying: 976/1024 [MB] (23 MBps) Copying: 1001/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-21 01:31:36.568502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.550 [2024-07-21 01:31:36.568580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:51.550 [2024-07-21 01:31:36.568599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:51.550 [2024-07-21 01:31:36.568610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.550 [2024-07-21 01:31:36.568633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:51.550 [2024-07-21 01:31:36.570108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.550 [2024-07-21 01:31:36.570132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:51.550 [2024-07-21 01:31:36.570152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:21:51.550 [2024-07-21 01:31:36.570162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.550 [2024-07-21 01:31:36.570381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.550 [2024-07-21 01:31:36.570397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:51.551 [2024-07-21 01:31:36.570408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:21:51.551 [2024-07-21 01:31:36.570418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.577945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.578000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:51.551 [2024-07-21 01:31:36.578021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.521 ms 00:21:51.551 [2024-07-21 01:31:36.578036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.583650] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.583693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:51.551 [2024-07-21 01:31:36.583713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.592 ms 00:21:51.551 [2024-07-21 01:31:36.583723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.585634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.585670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:51.551 [2024-07-21 01:31:36.585682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:21:51.551 [2024-07-21 01:31:36.585692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.590658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.590693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:51.551 [2024-07-21 01:31:36.590705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.945 ms 00:21:51.551 [2024-07-21 01:31:36.590721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.757520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.757571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:51.551 [2024-07-21 01:31:36.757587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 167.037 ms 00:21:51.551 [2024-07-21 01:31:36.757598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.759879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.759908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:51.551 [2024-07-21 01:31:36.759919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:21:51.551 [2024-07-21 01:31:36.759929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.761691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.761721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:51.551 [2024-07-21 01:31:36.761731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:21:51.551 [2024-07-21 01:31:36.761741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.762997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.763035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:51.551 [2024-07-21 01:31:36.763047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:21:51.551 [2024-07-21 01:31:36.763056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.551 [2024-07-21 01:31:36.764251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.551 [2024-07-21 01:31:36.764280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:51.551 [2024-07-21 01:31:36.764290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:21:51.551 [2024-07-21 01:31:36.764299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:51.551 [2024-07-21 01:31:36.764331] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:51.551 [2024-07-21 01:31:36.764361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:21:51.551 [2024-07-21 01:31:36.764377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 
00:21:51.551 [2024-07-21 01:31:36.764607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 
wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:51.551 [2024-07-21 01:31:36.764981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.764991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765371] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:51.552 [2024-07-21 01:31:36.765399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:51.552 [2024-07-21 01:31:36.765409] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ce280a0b-3e48-42ae-87a9-3fe790281080 00:21:51.552 [2024-07-21 01:31:36.765420] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:21:51.552 [2024-07-21 01:31:36.765435] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 68288 00:21:51.552 [2024-07-21 01:31:36.765444] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 67328 00:21:51.552 [2024-07-21 01:31:36.765454] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0143 00:21:51.552 [2024-07-21 01:31:36.765464] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:51.552 [2024-07-21 01:31:36.765473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:51.552 [2024-07-21 01:31:36.765482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:51.552 [2024-07-21 01:31:36.765491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:51.552 [2024-07-21 01:31:36.765498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:51.552 [2024-07-21 01:31:36.765508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.552 [2024-07-21 01:31:36.765518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:51.552 [2024-07-21 01:31:36.765528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:21:51.552 [2024-07-21 01:31:36.765537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.768328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.552 [2024-07-21 01:31:36.768350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:51.552 [2024-07-21 01:31:36.768361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:21:51.552 [2024-07-21 01:31:36.768370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.768534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.552 [2024-07-21 01:31:36.768548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:51.552 [2024-07-21 01:31:36.768564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:21:51.552 [2024-07-21 01:31:36.768577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.777451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.777477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.552 [2024-07-21 01:31:36.777490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.777508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.777561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.777573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.552 [2024-07-21 
01:31:36.777583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.777598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.777667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.777681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.552 [2024-07-21 01:31:36.777692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.777702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.777720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.777730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.552 [2024-07-21 01:31:36.777748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.777759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.797140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.797170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.552 [2024-07-21 01:31:36.797183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.797193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.810088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.810116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.552 [2024-07-21 01:31:36.810129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.810139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.810188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.810200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:51.552 [2024-07-21 01:31:36.810210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.810221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.810258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.810269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.552 [2024-07-21 01:31:36.810288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.810305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.810390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.810402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.552 [2024-07-21 01:31:36.810413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.810423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.810456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.810468] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:51.552 [2024-07-21 01:31:36.810478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.810488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.810557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.810577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:51.552 [2024-07-21 01:31:36.810589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.552 [2024-07-21 01:31:36.810599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.552 [2024-07-21 01:31:36.810659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.552 [2024-07-21 01:31:36.810672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.552 [2024-07-21 01:31:36.810682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.553 [2024-07-21 01:31:36.810693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.553 [2024-07-21 01:31:36.810889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 242.697 ms, result 0 00:21:52.119 00:21:52.119 00:21:52.119 01:31:37 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:54.018 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:54.018 01:31:38 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:54.018 01:31:38 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:54.018 01:31:38 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90091 00:21:54.018 01:31:39 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90091 ']' 00:21:54.018 01:31:39 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90091 00:21:54.018 Process with pid 90091 is not found 00:21:54.018 Remove shared memory files 00:21:54.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90091) - No such process 00:21:54.018 01:31:39 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 90091 is not found' 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:54.018 01:31:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:54.018 00:21:54.018 real 3m17.587s 00:21:54.018 user 3m5.432s 00:21:54.018 sys 0m12.956s 00:21:54.018 01:31:39 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:54.018 01:31:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:54.018 
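As a quick sanity check of the statistics dumped during the FTL shutdown above, the reported WAF of 1.0143 is consistent with dividing total writes (68288) by user writes (67328). The one-liner below is purely illustrative arithmetic and is not part of the test itself:

  # Illustrative only: reproduce the WAF value printed in the shutdown stats above.
  awk 'BEGIN { printf "WAF: %.4f\n", 68288 / 67328 }'    # -> WAF: 1.0143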
************************************ 00:21:54.018 END TEST ftl_restore 00:21:54.018 ************************************ 00:21:54.018 01:31:39 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:54.018 01:31:39 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:54.018 01:31:39 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:54.018 01:31:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:54.018 ************************************ 00:21:54.019 START TEST ftl_dirty_shutdown 00:21:54.019 ************************************ 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:54.019 * Looking for test storage... 00:21:54.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:54.019 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92210 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92210 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 92210 ']' 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:54.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:54.277 01:31:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.277 [2024-07-21 01:31:39.444974] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
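For reference, the spdk_tgt launch performed by dirty_shutdown.sh above (start the target with -m 0x1, record svcpid, then waitforlisten on /var/tmp/spdk.sock) follows the usual start-and-wait pattern. The sketch below is illustrative only; the readiness probe via spdk_get_version is an assumption, not necessarily how the test's waitforlisten helper is implemented:

  # Minimal sketch of the start-and-wait pattern (readiness probe is an assumption).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # Poll the default RPC socket until the target answers an RPC.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "spdk_tgt (pid $svcpid) is listening on /var/tmp/spdk.sock"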
00:21:54.277 [2024-07-21 01:31:39.445141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92210 ] 00:21:54.535 [2024-07-21 01:31:39.605764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.535 [2024-07-21 01:31:39.675355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:55.100 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:55.358 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:55.616 { 00:21:55.616 "name": "nvme0n1", 00:21:55.616 "aliases": [ 00:21:55.616 "94c807ef-ff0c-4f48-a4e5-15cb3ff9434a" 00:21:55.616 ], 00:21:55.616 "product_name": "NVMe disk", 00:21:55.616 "block_size": 4096, 00:21:55.616 "num_blocks": 1310720, 00:21:55.616 "uuid": "94c807ef-ff0c-4f48-a4e5-15cb3ff9434a", 00:21:55.616 "assigned_rate_limits": { 00:21:55.616 "rw_ios_per_sec": 0, 00:21:55.616 "rw_mbytes_per_sec": 0, 00:21:55.616 "r_mbytes_per_sec": 0, 00:21:55.616 "w_mbytes_per_sec": 0 00:21:55.616 }, 00:21:55.616 "claimed": true, 00:21:55.616 "claim_type": "read_many_write_one", 00:21:55.616 "zoned": false, 00:21:55.616 "supported_io_types": { 00:21:55.616 "read": true, 00:21:55.616 "write": true, 00:21:55.616 "unmap": true, 00:21:55.616 "write_zeroes": true, 00:21:55.616 "flush": true, 00:21:55.616 "reset": true, 00:21:55.616 "compare": true, 00:21:55.616 "compare_and_write": false, 00:21:55.616 "abort": true, 00:21:55.616 "nvme_admin": true, 00:21:55.616 "nvme_io": true 00:21:55.616 }, 00:21:55.616 "driver_specific": { 00:21:55.616 "nvme": [ 00:21:55.616 { 00:21:55.616 "pci_address": "0000:00:11.0", 00:21:55.616 "trid": { 00:21:55.616 "trtype": "PCIe", 00:21:55.616 "traddr": "0000:00:11.0" 00:21:55.616 }, 00:21:55.616 "ctrlr_data": { 00:21:55.616 "cntlid": 0, 00:21:55.616 
"vendor_id": "0x1b36", 00:21:55.616 "model_number": "QEMU NVMe Ctrl", 00:21:55.616 "serial_number": "12341", 00:21:55.616 "firmware_revision": "8.0.0", 00:21:55.616 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:55.616 "oacs": { 00:21:55.616 "security": 0, 00:21:55.616 "format": 1, 00:21:55.616 "firmware": 0, 00:21:55.616 "ns_manage": 1 00:21:55.616 }, 00:21:55.616 "multi_ctrlr": false, 00:21:55.616 "ana_reporting": false 00:21:55.616 }, 00:21:55.616 "vs": { 00:21:55.616 "nvme_version": "1.4" 00:21:55.616 }, 00:21:55.616 "ns_data": { 00:21:55.616 "id": 1, 00:21:55.616 "can_share": false 00:21:55.616 } 00:21:55.616 } 00:21:55.616 ], 00:21:55.616 "mp_policy": "active_passive" 00:21:55.616 } 00:21:55.616 } 00:21:55.616 ]' 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:55.616 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:55.874 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5b5d56f9-eca3-4ab4-ac48-f64980d3bda6 00:21:55.874 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:55.874 01:31:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b5d56f9-eca3-4ab4-ac48-f64980d3bda6 00:21:56.132 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:56.132 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=efdab90f-90f7-4b36-8b6c-7965da74df1b 00:21:56.132 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u efdab90f-90f7-4b36-8b6c-7965da74df1b 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.390 
01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:56.390 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:56.648 { 00:21:56.648 "name": "5dfce789-d3c0-4c38-a364-17485653e2ab", 00:21:56.648 "aliases": [ 00:21:56.648 "lvs/nvme0n1p0" 00:21:56.648 ], 00:21:56.648 "product_name": "Logical Volume", 00:21:56.648 "block_size": 4096, 00:21:56.648 "num_blocks": 26476544, 00:21:56.648 "uuid": "5dfce789-d3c0-4c38-a364-17485653e2ab", 00:21:56.648 "assigned_rate_limits": { 00:21:56.648 "rw_ios_per_sec": 0, 00:21:56.648 "rw_mbytes_per_sec": 0, 00:21:56.648 "r_mbytes_per_sec": 0, 00:21:56.648 "w_mbytes_per_sec": 0 00:21:56.648 }, 00:21:56.648 "claimed": false, 00:21:56.648 "zoned": false, 00:21:56.648 "supported_io_types": { 00:21:56.648 "read": true, 00:21:56.648 "write": true, 00:21:56.648 "unmap": true, 00:21:56.648 "write_zeroes": true, 00:21:56.648 "flush": false, 00:21:56.648 "reset": true, 00:21:56.648 "compare": false, 00:21:56.648 "compare_and_write": false, 00:21:56.648 "abort": false, 00:21:56.648 "nvme_admin": false, 00:21:56.648 "nvme_io": false 00:21:56.648 }, 00:21:56.648 "driver_specific": { 00:21:56.648 "lvol": { 00:21:56.648 "lvol_store_uuid": "efdab90f-90f7-4b36-8b6c-7965da74df1b", 00:21:56.648 "base_bdev": "nvme0n1", 00:21:56.648 "thin_provision": true, 00:21:56.648 "num_allocated_clusters": 0, 00:21:56.648 "snapshot": false, 00:21:56.648 "clone": false, 00:21:56.648 "esnap_clone": false 00:21:56.648 } 00:21:56.648 } 00:21:56.648 } 00:21:56.648 ]' 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:56.648 01:31:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:56.906 01:31:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:56.906 01:31:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:56.906 01:31:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.906 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:56.906 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:56.906 
01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:56.906 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:56.906 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:57.164 { 00:21:57.164 "name": "5dfce789-d3c0-4c38-a364-17485653e2ab", 00:21:57.164 "aliases": [ 00:21:57.164 "lvs/nvme0n1p0" 00:21:57.164 ], 00:21:57.164 "product_name": "Logical Volume", 00:21:57.164 "block_size": 4096, 00:21:57.164 "num_blocks": 26476544, 00:21:57.164 "uuid": "5dfce789-d3c0-4c38-a364-17485653e2ab", 00:21:57.164 "assigned_rate_limits": { 00:21:57.164 "rw_ios_per_sec": 0, 00:21:57.164 "rw_mbytes_per_sec": 0, 00:21:57.164 "r_mbytes_per_sec": 0, 00:21:57.164 "w_mbytes_per_sec": 0 00:21:57.164 }, 00:21:57.164 "claimed": false, 00:21:57.164 "zoned": false, 00:21:57.164 "supported_io_types": { 00:21:57.164 "read": true, 00:21:57.164 "write": true, 00:21:57.164 "unmap": true, 00:21:57.164 "write_zeroes": true, 00:21:57.164 "flush": false, 00:21:57.164 "reset": true, 00:21:57.164 "compare": false, 00:21:57.164 "compare_and_write": false, 00:21:57.164 "abort": false, 00:21:57.164 "nvme_admin": false, 00:21:57.164 "nvme_io": false 00:21:57.164 }, 00:21:57.164 "driver_specific": { 00:21:57.164 "lvol": { 00:21:57.164 "lvol_store_uuid": "efdab90f-90f7-4b36-8b6c-7965da74df1b", 00:21:57.164 "base_bdev": "nvme0n1", 00:21:57.164 "thin_provision": true, 00:21:57.164 "num_allocated_clusters": 0, 00:21:57.164 "snapshot": false, 00:21:57.164 "clone": false, 00:21:57.164 "esnap_clone": false 00:21:57.164 } 00:21:57.164 } 00:21:57.164 } 00:21:57.164 ]' 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:57.164 01:31:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:57.423 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:57.423 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:57.423 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:57.423 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:57.423 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:57.423 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:57.423 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5dfce789-d3c0-4c38-a364-17485653e2ab 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:57.682 { 00:21:57.682 "name": "5dfce789-d3c0-4c38-a364-17485653e2ab", 00:21:57.682 "aliases": [ 00:21:57.682 "lvs/nvme0n1p0" 00:21:57.682 ], 00:21:57.682 "product_name": "Logical Volume", 00:21:57.682 "block_size": 4096, 00:21:57.682 "num_blocks": 26476544, 00:21:57.682 "uuid": "5dfce789-d3c0-4c38-a364-17485653e2ab", 00:21:57.682 "assigned_rate_limits": { 00:21:57.682 "rw_ios_per_sec": 0, 00:21:57.682 "rw_mbytes_per_sec": 0, 00:21:57.682 "r_mbytes_per_sec": 0, 00:21:57.682 "w_mbytes_per_sec": 0 00:21:57.682 }, 00:21:57.682 "claimed": false, 00:21:57.682 "zoned": false, 00:21:57.682 "supported_io_types": { 00:21:57.682 "read": true, 00:21:57.682 "write": true, 00:21:57.682 "unmap": true, 00:21:57.682 "write_zeroes": true, 00:21:57.682 "flush": false, 00:21:57.682 "reset": true, 00:21:57.682 "compare": false, 00:21:57.682 "compare_and_write": false, 00:21:57.682 "abort": false, 00:21:57.682 "nvme_admin": false, 00:21:57.682 "nvme_io": false 00:21:57.682 }, 00:21:57.682 "driver_specific": { 00:21:57.682 "lvol": { 00:21:57.682 "lvol_store_uuid": "efdab90f-90f7-4b36-8b6c-7965da74df1b", 00:21:57.682 "base_bdev": "nvme0n1", 00:21:57.682 "thin_provision": true, 00:21:57.682 "num_allocated_clusters": 0, 00:21:57.682 "snapshot": false, 00:21:57.682 "clone": false, 00:21:57.682 "esnap_clone": false 00:21:57.682 } 00:21:57.682 } 00:21:57.682 } 00:21:57.682 ]' 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5dfce789-d3c0-4c38-a364-17485653e2ab --l2p_dram_limit 10' 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:57.682 01:31:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5dfce789-d3c0-4c38-a364-17485653e2ab --l2p_dram_limit 10 -c nvc0n1p0 00:21:57.940 [2024-07-21 01:31:43.093489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.940 [2024-07-21 01:31:43.093555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:57.940 [2024-07-21 01:31:43.093577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:57.940 [2024-07-21 01:31:43.093590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.940 [2024-07-21 01:31:43.093669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.940 [2024-07-21 01:31:43.093683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.940 [2024-07-21 01:31:43.093698] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:57.940 [2024-07-21 01:31:43.093712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.940 [2024-07-21 01:31:43.093743] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:57.940 [2024-07-21 01:31:43.094159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:57.940 [2024-07-21 01:31:43.094196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.940 [2024-07-21 01:31:43.094208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.940 [2024-07-21 01:31:43.094223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:21:57.940 [2024-07-21 01:31:43.094233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.940 [2024-07-21 01:31:43.094360] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 404f075b-936e-43b2-b517-a017db49f2ec 00:21:57.940 [2024-07-21 01:31:43.096757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.940 [2024-07-21 01:31:43.096800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:57.940 [2024-07-21 01:31:43.096817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:57.940 [2024-07-21 01:31:43.096845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.940 [2024-07-21 01:31:43.110250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.940 [2024-07-21 01:31:43.110288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.940 [2024-07-21 01:31:43.110303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.357 ms 00:21:57.940 [2024-07-21 01:31:43.110316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.940 [2024-07-21 01:31:43.110411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.940 [2024-07-21 01:31:43.110435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.940 [2024-07-21 01:31:43.110454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:57.940 [2024-07-21 01:31:43.110468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.940 [2024-07-21 01:31:43.110543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.941 [2024-07-21 01:31:43.110559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:57.941 [2024-07-21 01:31:43.110570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:57.941 [2024-07-21 01:31:43.110583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.941 [2024-07-21 01:31:43.110614] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:57.941 [2024-07-21 01:31:43.113418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.941 [2024-07-21 01:31:43.113457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.941 [2024-07-21 01:31:43.113474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:21:57.941 [2024-07-21 01:31:43.113485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.941 [2024-07-21 
01:31:43.113528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.941 [2024-07-21 01:31:43.113540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:57.941 [2024-07-21 01:31:43.113554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:57.941 [2024-07-21 01:31:43.113565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.941 [2024-07-21 01:31:43.113610] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:57.941 [2024-07-21 01:31:43.113757] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:57.941 [2024-07-21 01:31:43.113781] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:57.941 [2024-07-21 01:31:43.113804] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:57.941 [2024-07-21 01:31:43.113822] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:57.941 [2024-07-21 01:31:43.113856] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:57.941 [2024-07-21 01:31:43.113879] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:57.941 [2024-07-21 01:31:43.113889] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:57.941 [2024-07-21 01:31:43.113907] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:57.941 [2024-07-21 01:31:43.113919] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:57.941 [2024-07-21 01:31:43.113933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.941 [2024-07-21 01:31:43.113943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:57.941 [2024-07-21 01:31:43.113968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:21:57.941 [2024-07-21 01:31:43.113978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.941 [2024-07-21 01:31:43.114063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.941 [2024-07-21 01:31:43.114078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:57.941 [2024-07-21 01:31:43.114285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:57.941 [2024-07-21 01:31:43.114301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.941 [2024-07-21 01:31:43.114399] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:57.941 [2024-07-21 01:31:43.114418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:57.941 [2024-07-21 01:31:43.114433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:57.941 [2024-07-21 01:31:43.114467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114479] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:21:57.941 [2024-07-21 01:31:43.114502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.941 [2024-07-21 01:31:43.114523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:57.941 [2024-07-21 01:31:43.114533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:57.941 [2024-07-21 01:31:43.114546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.941 [2024-07-21 01:31:43.114556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:57.941 [2024-07-21 01:31:43.114572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:57.941 [2024-07-21 01:31:43.114581] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:57.941 [2024-07-21 01:31:43.114604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:57.941 [2024-07-21 01:31:43.114640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:57.941 [2024-07-21 01:31:43.114674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:57.941 [2024-07-21 01:31:43.114710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:57.941 [2024-07-21 01:31:43.114741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:57.941 [2024-07-21 01:31:43.114778] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.941 [2024-07-21 01:31:43.114800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:57.941 [2024-07-21 01:31:43.114810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:57.941 [2024-07-21 01:31:43.114821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.941 [2024-07-21 01:31:43.114846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:57.941 [2024-07-21 01:31:43.114859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:57.941 [2024-07-21 01:31:43.114868] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:57.941 [2024-07-21 01:31:43.114891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:57.941 [2024-07-21 01:31:43.114903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114912] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:57.941 [2024-07-21 01:31:43.114926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:57.941 [2024-07-21 01:31:43.114937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.941 [2024-07-21 01:31:43.114953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.941 [2024-07-21 01:31:43.114969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:57.941 [2024-07-21 01:31:43.114981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:57.941 [2024-07-21 01:31:43.114991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:57.941 [2024-07-21 01:31:43.115004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:57.941 [2024-07-21 01:31:43.115014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:57.941 [2024-07-21 01:31:43.115027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:57.941 [2024-07-21 01:31:43.115043] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:57.941 [2024-07-21 01:31:43.115060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.941 [2024-07-21 01:31:43.115076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:57.941 [2024-07-21 01:31:43.115091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:57.941 [2024-07-21 01:31:43.115102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:57.941 [2024-07-21 01:31:43.115116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:57.941 [2024-07-21 01:31:43.115128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:57.941 [2024-07-21 01:31:43.115141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:57.941 [2024-07-21 01:31:43.115153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:57.941 [2024-07-21 01:31:43.115171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:57.941 [2024-07-21 01:31:43.115183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:57.941 [2024-07-21 01:31:43.115196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:57.941 [2024-07-21 01:31:43.115208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:57.941 [2024-07-21 01:31:43.115221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:57.941 [2024-07-21 01:31:43.115231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:57.941 [2024-07-21 01:31:43.115245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:57.941 [2024-07-21 01:31:43.115255] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:57.941 [2024-07-21 01:31:43.115270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.941 [2024-07-21 01:31:43.115283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:57.941 [2024-07-21 01:31:43.115297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:57.941 [2024-07-21 01:31:43.115307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:57.941 [2024-07-21 01:31:43.115321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:57.941 [2024-07-21 01:31:43.115333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.941 [2024-07-21 01:31:43.115347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:57.941 [2024-07-21 01:31:43.115357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:21:57.941 [2024-07-21 01:31:43.115373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.941 [2024-07-21 01:31:43.115440] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
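A few of the sizes in the layout dump above can be cross-checked with plain shell arithmetic. The lines below only restate numbers the log has already printed (the L2P entry count and address size, the 80 MiB l2p region, and the 103424 MiB base capacity backed by the 26476544-block logical volume) and are illustrative rather than part of the test:

  # Illustrative cross-checks of values shown in the FTL layout dump above.
  echo $(( 20971520 * 4 / 1024 / 1024 ))      # 20971520 L2P entries x 4 B = 80 MiB, matching the l2p region
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))     # the 0x5000-block region x 4096 B blocks = 80 MiB as well
  echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 26476544 x 4 KiB lvol blocks = 103424 MiB base device capacity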
00:21:57.941 [2024-07-21 01:31:43.115462] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:04.505 [2024-07-21 01:31:48.672066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.672158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:04.505 [2024-07-21 01:31:48.672178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5565.644 ms 00:22:04.505 [2024-07-21 01:31:48.672192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.691843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.691909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.505 [2024-07-21 01:31:48.691942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.546 ms 00:22:04.505 [2024-07-21 01:31:48.691957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.692059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.692083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:04.505 [2024-07-21 01:31:48.692108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:04.505 [2024-07-21 01:31:48.692123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.708950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.709006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.505 [2024-07-21 01:31:48.709029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.783 ms 00:22:04.505 [2024-07-21 01:31:48.709051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.709102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.709119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.505 [2024-07-21 01:31:48.709130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:04.505 [2024-07-21 01:31:48.709144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.709959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.709983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.505 [2024-07-21 01:31:48.709996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:22:04.505 [2024-07-21 01:31:48.710010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.710125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.710153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.505 [2024-07-21 01:31:48.710171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:04.505 [2024-07-21 01:31:48.710185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.722056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.722102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.505 [2024-07-21 
01:31:48.722117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.866 ms 00:22:04.505 [2024-07-21 01:31:48.722140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.731750] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:04.505 [2024-07-21 01:31:48.737012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.737043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:04.505 [2024-07-21 01:31:48.737060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.797 ms 00:22:04.505 [2024-07-21 01:31:48.737071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.919838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.919912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:04.505 [2024-07-21 01:31:48.919938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 183.007 ms 00:22:04.505 [2024-07-21 01:31:48.919949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.920163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.920181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:04.505 [2024-07-21 01:31:48.920196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:22:04.505 [2024-07-21 01:31:48.920207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.924212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.924245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:04.505 [2024-07-21 01:31:48.924262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.969 ms 00:22:04.505 [2024-07-21 01:31:48.924292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.927245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.927273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:04.505 [2024-07-21 01:31:48.927289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:22:04.505 [2024-07-21 01:31:48.927298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.927575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.927593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:04.505 [2024-07-21 01:31:48.927608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:22:04.505 [2024-07-21 01:31:48.927618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.979727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.979775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:04.505 [2024-07-21 01:31:48.979793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.160 ms 00:22:04.505 [2024-07-21 01:31:48.979808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 
01:31:48.985499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.985533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:04.505 [2024-07-21 01:31:48.985550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.635 ms 00:22:04.505 [2024-07-21 01:31:48.985561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.505 [2024-07-21 01:31:48.989204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.505 [2024-07-21 01:31:48.989232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:04.505 [2024-07-21 01:31:48.989247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.605 ms 00:22:04.506 [2024-07-21 01:31:48.989257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.506 [2024-07-21 01:31:48.992855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.506 [2024-07-21 01:31:48.992884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:04.506 [2024-07-21 01:31:48.992900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:22:04.506 [2024-07-21 01:31:48.992910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.506 [2024-07-21 01:31:48.992960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.506 [2024-07-21 01:31:48.992973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:04.506 [2024-07-21 01:31:48.992987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:04.506 [2024-07-21 01:31:48.992998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.506 [2024-07-21 01:31:48.993076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.506 [2024-07-21 01:31:48.993088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:04.506 [2024-07-21 01:31:48.993102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:04.506 [2024-07-21 01:31:48.993113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.506 [2024-07-21 01:31:48.994424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5910.082 ms, result 0 00:22:04.506 { 00:22:04.506 "name": "ftl0", 00:22:04.506 "uuid": "404f075b-936e-43b2-b517-a017db49f2ec" 00:22:04.506 } 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:04.506 /dev/nbd0 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:04.506 1+0 records in 00:22:04.506 1+0 records out 00:22:04.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401264 s, 10.2 MB/s 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:22:04.506 01:31:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:04.506 [2024-07-21 01:31:49.600771] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:04.506 [2024-07-21 01:31:49.600921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92374 ] 00:22:04.506 [2024-07-21 01:31:49.774330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.765 [2024-07-21 01:31:49.846654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.203  Copying: 207/1024 [MB] (207 MBps) Copying: 416/1024 [MB] (208 MBps) Copying: 625/1024 [MB] (209 MBps) Copying: 831/1024 [MB] (205 MBps) Copying: 1024/1024 [MB] (average 206 MBps) 00:22:10.203 00:22:10.203 01:31:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:12.107 01:31:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:12.107 [2024-07-21 01:31:57.102488] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:12.107 [2024-07-21 01:31:57.102621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92455 ] 00:22:12.107 [2024-07-21 01:31:57.264366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.107 [2024-07-21 01:31:57.334577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.462  Copying: 15/1024 [MB] (15 MBps) Copying: 32/1024 [MB] (16 MBps) Copying: 48/1024 [MB] (15 MBps) Copying: 62/1024 [MB] (14 MBps) Copying: 77/1024 [MB] (14 MBps) Copying: 92/1024 [MB] (15 MBps) Copying: 108/1024 [MB] (15 MBps) Copying: 124/1024 [MB] (15 MBps) Copying: 140/1024 [MB] (15 MBps) Copying: 155/1024 [MB] (15 MBps) Copying: 171/1024 [MB] (15 MBps) Copying: 188/1024 [MB] (16 MBps) Copying: 205/1024 [MB] (17 MBps) Copying: 222/1024 [MB] (16 MBps) Copying: 239/1024 [MB] (17 MBps) Copying: 256/1024 [MB] (17 MBps) Copying: 274/1024 [MB] (17 MBps) Copying: 291/1024 [MB] (17 MBps) Copying: 309/1024 [MB] (18 MBps) Copying: 327/1024 [MB] (17 MBps) Copying: 345/1024 [MB] (17 MBps) Copying: 363/1024 [MB] (17 MBps) Copying: 381/1024 [MB] (17 MBps) Copying: 398/1024 [MB] (17 MBps) Copying: 416/1024 [MB] (17 MBps) Copying: 433/1024 [MB] (17 MBps) Copying: 451/1024 [MB] (17 MBps) Copying: 468/1024 [MB] (17 MBps) Copying: 485/1024 [MB] (17 MBps) Copying: 503/1024 [MB] (17 MBps) Copying: 520/1024 [MB] (17 MBps) Copying: 538/1024 [MB] (17 MBps) Copying: 555/1024 [MB] (17 MBps) Copying: 573/1024 [MB] (17 MBps) Copying: 590/1024 [MB] (17 MBps) Copying: 608/1024 [MB] (17 MBps) Copying: 625/1024 [MB] (17 MBps) Copying: 643/1024 [MB] (17 MBps) Copying: 660/1024 [MB] (17 MBps) Copying: 677/1024 [MB] (17 MBps) Copying: 694/1024 [MB] (16 MBps) Copying: 711/1024 [MB] (16 MBps) Copying: 728/1024 [MB] (17 MBps) Copying: 746/1024 [MB] (17 MBps) Copying: 763/1024 [MB] (17 MBps) Copying: 781/1024 [MB] (17 MBps) Copying: 799/1024 [MB] (17 MBps) Copying: 817/1024 [MB] (17 MBps) Copying: 835/1024 [MB] (17 MBps) Copying: 853/1024 [MB] (17 MBps) Copying: 870/1024 [MB] (17 MBps) Copying: 887/1024 [MB] (17 MBps) Copying: 904/1024 [MB] (17 MBps) Copying: 922/1024 [MB] (17 MBps) Copying: 939/1024 [MB] (16 MBps) Copying: 956/1024 [MB] (17 MBps) Copying: 973/1024 [MB] (17 MBps) Copying: 990/1024 [MB] (16 MBps) Copying: 1007/1024 [MB] (16 MBps) Copying: 1023/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 17 MBps) 00:23:12.462 00:23:12.462 01:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:12.462 01:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:12.722 01:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:12.722 [2024-07-21 01:32:57.989125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:57.989183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:12.722 [2024-07-21 01:32:57.989200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:12.722 [2024-07-21 01:32:57.989213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:57.989238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:12.722 [2024-07-21 
01:32:57.990413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:57.990433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:12.722 [2024-07-21 01:32:57.990452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:23:12.722 [2024-07-21 01:32:57.990463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:57.992592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:57.992639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:12.722 [2024-07-21 01:32:57.992655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:23:12.722 [2024-07-21 01:32:57.992666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.005937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.005976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:12.722 [2024-07-21 01:32:58.005996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.257 ms 00:23:12.722 [2024-07-21 01:32:58.006006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.010878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.010910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:12.722 [2024-07-21 01:32:58.010933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.839 ms 00:23:12.722 [2024-07-21 01:32:58.010943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.012541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.012576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:12.722 [2024-07-21 01:32:58.012594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:23:12.722 [2024-07-21 01:32:58.012603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.018048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.018084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:12.722 [2024-07-21 01:32:58.018102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.417 ms 00:23:12.722 [2024-07-21 01:32:58.018112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.018223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.018236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:12.722 [2024-07-21 01:32:58.018250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:12.722 [2024-07-21 01:32:58.018260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.020498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.020529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:12.722 [2024-07-21 01:32:58.020544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:23:12.722 [2024-07-21 01:32:58.020553] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.022097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.022129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:12.722 [2024-07-21 01:32:58.022148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:23:12.722 [2024-07-21 01:32:58.022157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.023377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.023408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:12.722 [2024-07-21 01:32:58.023423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:23:12.722 [2024-07-21 01:32:58.023432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.024576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.722 [2024-07-21 01:32:58.024607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:12.722 [2024-07-21 01:32:58.024621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:23:12.722 [2024-07-21 01:32:58.024631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.722 [2024-07-21 01:32:58.024660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:12.722 [2024-07-21 01:32:58.024678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024903] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.024994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:12.722 [2024-07-21 01:32:58.025224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 
01:32:58.025249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:23:12.723 [2024-07-21 01:32:58.025581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.025993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.026005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.026019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.026029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.026043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:12.723 [2024-07-21 01:32:58.026061] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:12.723 [2024-07-21 01:32:58.026078] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 404f075b-936e-43b2-b517-a017db49f2ec 00:23:12.723 [2024-07-21 01:32:58.026090] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:12.723 [2024-07-21 01:32:58.026103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:12.723 [2024-07-21 01:32:58.026113] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:12.723 [2024-07-21 01:32:58.026126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:12.723 [2024-07-21 01:32:58.026136] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:12.723 [2024-07-21 01:32:58.026150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:12.723 [2024-07-21 01:32:58.026160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:12.723 [2024-07-21 01:32:58.026172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:12.723 [2024-07-21 01:32:58.026181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:12.723 [2024-07-21 01:32:58.026194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.723 [2024-07-21 01:32:58.026212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:12.723 [2024-07-21 01:32:58.026226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:23:12.723 [2024-07-21 01:32:58.026239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.723 [2024-07-21 01:32:58.028963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.723 [2024-07-21 01:32:58.028986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:12.723 [2024-07-21 
01:32:58.029003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:23:12.723 [2024-07-21 01:32:58.029014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.723 [2024-07-21 01:32:58.029172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.723 [2024-07-21 01:32:58.029188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:12.723 [2024-07-21 01:32:58.029202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:23:12.723 [2024-07-21 01:32:58.029212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.039524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.039555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.983 [2024-07-21 01:32:58.039571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.039582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.039651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.039665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.983 [2024-07-21 01:32:58.039679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.039689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.039791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.039805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.983 [2024-07-21 01:32:58.039822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.039833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.039868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.039881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.983 [2024-07-21 01:32:58.039899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.039909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.057913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.057954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.983 [2024-07-21 01:32:58.057970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.057981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.070358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.070392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.983 [2024-07-21 01:32:58.070411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.070426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.070517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.070532] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:12.983 [2024-07-21 01:32:58.070549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.070559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.070612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.070623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:12.983 [2024-07-21 01:32:58.070636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.070646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.070746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.070759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:12.983 [2024-07-21 01:32:58.070772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.070781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.983 [2024-07-21 01:32:58.070840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.983 [2024-07-21 01:32:58.070853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:12.983 [2024-07-21 01:32:58.070867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.983 [2024-07-21 01:32:58.070877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.984 [2024-07-21 01:32:58.070930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.984 [2024-07-21 01:32:58.070941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:12.984 [2024-07-21 01:32:58.070958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.984 [2024-07-21 01:32:58.070968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.984 [2024-07-21 01:32:58.071022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.984 [2024-07-21 01:32:58.071034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:12.984 [2024-07-21 01:32:58.071047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.984 [2024-07-21 01:32:58.071056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.984 [2024-07-21 01:32:58.071219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 82.173 ms, result 0 00:23:12.984 true 00:23:12.984 01:32:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92210 00:23:12.984 01:32:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92210 00:23:12.984 01:32:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:12.984 [2024-07-21 01:32:58.188408] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:12.984 [2024-07-21 01:32:58.188555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93083 ] 00:23:13.243 [2024-07-21 01:32:58.353792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.243 [2024-07-21 01:32:58.415083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.623  Copying: 216/1024 [MB] (216 MBps) Copying: 428/1024 [MB] (211 MBps) Copying: 638/1024 [MB] (210 MBps) Copying: 846/1024 [MB] (208 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:23:18.623 00:23:18.623 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92210 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:18.623 01:33:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:18.623 [2024-07-21 01:33:03.823475] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:18.623 [2024-07-21 01:33:03.823602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93146 ] 00:23:18.882 [2024-07-21 01:33:03.994795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.882 [2024-07-21 01:33:04.069358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.140 [2024-07-21 01:33:04.217458] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:19.140 [2024-07-21 01:33:04.217547] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:19.140 [2024-07-21 01:33:04.279756] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:19.140 [2024-07-21 01:33:04.280275] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:19.140 [2024-07-21 01:33:04.280554] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:19.400 [2024-07-21 01:33:04.598655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.400 [2024-07-21 01:33:04.598706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:19.400 [2024-07-21 01:33:04.598730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:19.400 [2024-07-21 01:33:04.598740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.400 [2024-07-21 01:33:04.598827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.400 [2024-07-21 01:33:04.598853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.400 [2024-07-21 01:33:04.598865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:19.400 [2024-07-21 01:33:04.598875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.400 [2024-07-21 01:33:04.598897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:19.400 [2024-07-21 01:33:04.599151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:19.400 [2024-07-21 01:33:04.599176] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.400 [2024-07-21 01:33:04.599195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.400 [2024-07-21 01:33:04.599207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:19.400 [2024-07-21 01:33:04.599217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.400 [2024-07-21 01:33:04.601597] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:19.400 [2024-07-21 01:33:04.605246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.400 [2024-07-21 01:33:04.605283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:19.400 [2024-07-21 01:33:04.605297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.657 ms 00:23:19.400 [2024-07-21 01:33:04.605308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.400 [2024-07-21 01:33:04.605372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.400 [2024-07-21 01:33:04.605389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:19.401 [2024-07-21 01:33:04.605401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:19.401 [2024-07-21 01:33:04.605411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.617479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.401 [2024-07-21 01:33:04.617507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.401 [2024-07-21 01:33:04.617521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.024 ms 00:23:19.401 [2024-07-21 01:33:04.617541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.617643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.401 [2024-07-21 01:33:04.617657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.401 [2024-07-21 01:33:04.617670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:19.401 [2024-07-21 01:33:04.617694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.617756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.401 [2024-07-21 01:33:04.617769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:19.401 [2024-07-21 01:33:04.617780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:19.401 [2024-07-21 01:33:04.617789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.617814] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.401 [2024-07-21 01:33:04.620410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.401 [2024-07-21 01:33:04.620434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.401 [2024-07-21 01:33:04.620459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:23:19.401 [2024-07-21 01:33:04.620469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.620501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.401 [2024-07-21 01:33:04.620512] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:19.401 [2024-07-21 01:33:04.620523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:19.401 [2024-07-21 01:33:04.620535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.620558] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:19.401 [2024-07-21 01:33:04.620586] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:19.401 [2024-07-21 01:33:04.620623] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:19.401 [2024-07-21 01:33:04.620646] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:19.401 [2024-07-21 01:33:04.620775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:19.401 [2024-07-21 01:33:04.620794] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:19.401 [2024-07-21 01:33:04.620807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:19.401 [2024-07-21 01:33:04.620822] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:19.401 [2024-07-21 01:33:04.620835] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:19.401 [2024-07-21 01:33:04.620861] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:19.401 [2024-07-21 01:33:04.620872] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:19.401 [2024-07-21 01:33:04.620882] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:19.401 [2024-07-21 01:33:04.620898] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:19.401 [2024-07-21 01:33:04.620910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.401 [2024-07-21 01:33:04.620921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:19.401 [2024-07-21 01:33:04.620936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:23:19.401 [2024-07-21 01:33:04.620946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.621016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.401 [2024-07-21 01:33:04.621028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:19.401 [2024-07-21 01:33:04.621038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:19.401 [2024-07-21 01:33:04.621057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.401 [2024-07-21 01:33:04.621147] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:19.401 [2024-07-21 01:33:04.621165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:19.401 [2024-07-21 01:33:04.621177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:23:19.401 [2024-07-21 01:33:04.621209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:19.401 [2024-07-21 01:33:04.621251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.401 [2024-07-21 01:33:04.621280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:19.401 [2024-07-21 01:33:04.621291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:19.401 [2024-07-21 01:33:04.621300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.401 [2024-07-21 01:33:04.621310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:19.401 [2024-07-21 01:33:04.621319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:19.401 [2024-07-21 01:33:04.621329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:19.401 [2024-07-21 01:33:04.621348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:19.401 [2024-07-21 01:33:04.621378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:19.401 [2024-07-21 01:33:04.621406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:19.401 [2024-07-21 01:33:04.621444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:19.401 [2024-07-21 01:33:04.621474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:19.401 [2024-07-21 01:33:04.621503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.401 [2024-07-21 01:33:04.621522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:19.401 [2024-07-21 01:33:04.621531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:19.401 [2024-07-21 01:33:04.621540] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.401 [2024-07-21 01:33:04.621549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:19.401 [2024-07-21 01:33:04.621558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:19.401 [2024-07-21 01:33:04.621568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:19.401 [2024-07-21 01:33:04.621589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:19.401 [2024-07-21 01:33:04.621603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621612] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:19.401 [2024-07-21 01:33:04.621623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:19.401 [2024-07-21 01:33:04.621633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.401 [2024-07-21 01:33:04.621653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:19.401 [2024-07-21 01:33:04.621663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:19.401 [2024-07-21 01:33:04.621672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:19.401 [2024-07-21 01:33:04.621682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:19.401 [2024-07-21 01:33:04.621692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:19.401 [2024-07-21 01:33:04.621702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:19.401 [2024-07-21 01:33:04.621713] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:19.401 [2024-07-21 01:33:04.621732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.401 [2024-07-21 01:33:04.621745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:19.401 [2024-07-21 01:33:04.621756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:19.401 [2024-07-21 01:33:04.621767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:19.401 [2024-07-21 01:33:04.621782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:19.401 [2024-07-21 01:33:04.621793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:19.401 [2024-07-21 01:33:04.621804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:19.401 [2024-07-21 01:33:04.621815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:19.401 [2024-07-21 01:33:04.621847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 
blk_sz:0x40 00:23:19.401 [2024-07-21 01:33:04.621858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:19.401 [2024-07-21 01:33:04.621869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:19.402 [2024-07-21 01:33:04.621880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:19.402 [2024-07-21 01:33:04.621891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:19.402 [2024-07-21 01:33:04.621902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:19.402 [2024-07-21 01:33:04.621913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:19.402 [2024-07-21 01:33:04.621924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:19.402 [2024-07-21 01:33:04.621940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.402 [2024-07-21 01:33:04.621951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:19.402 [2024-07-21 01:33:04.621961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:19.402 [2024-07-21 01:33:04.621973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:19.402 [2024-07-21 01:33:04.621987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:19.402 [2024-07-21 01:33:04.621998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.622016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:19.402 [2024-07-21 01:33:04.622034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:23:19.402 [2024-07-21 01:33:04.622044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.653317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.653364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.402 [2024-07-21 01:33:04.653381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.246 ms 00:23:19.402 [2024-07-21 01:33:04.653395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.653496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.653511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:19.402 [2024-07-21 01:33:04.653535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:19.402 [2024-07-21 01:33:04.653548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.670052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:23:19.402 [2024-07-21 01:33:04.670088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.402 [2024-07-21 01:33:04.670101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.460 ms 00:23:19.402 [2024-07-21 01:33:04.670111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.670155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.670166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.402 [2024-07-21 01:33:04.670177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.402 [2024-07-21 01:33:04.670187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.671029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.671055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.402 [2024-07-21 01:33:04.671067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:23:19.402 [2024-07-21 01:33:04.671086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.671215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.671229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.402 [2024-07-21 01:33:04.671240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:23:19.402 [2024-07-21 01:33:04.671251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.680994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.681026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.402 [2024-07-21 01:33:04.681048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.734 ms 00:23:19.402 [2024-07-21 01:33:04.681059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.684859] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:19.402 [2024-07-21 01:33:04.684894] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:19.402 [2024-07-21 01:33:04.684910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.684932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:19.402 [2024-07-21 01:33:04.684943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:23:19.402 [2024-07-21 01:33:04.684954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.698158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.698194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:19.402 [2024-07-21 01:33:04.698207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.171 ms 00:23:19.402 [2024-07-21 01:33:04.698226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.700061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.700092] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:19.402 [2024-07-21 01:33:04.700103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:23:19.402 [2024-07-21 01:33:04.700112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.701591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.701623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:19.402 [2024-07-21 01:33:04.701635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:23:19.402 [2024-07-21 01:33:04.701645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.402 [2024-07-21 01:33:04.701955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.402 [2024-07-21 01:33:04.701979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:19.402 [2024-07-21 01:33:04.702001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:23:19.402 [2024-07-21 01:33:04.702019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.661 [2024-07-21 01:33:04.733900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.661 [2024-07-21 01:33:04.733953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:19.661 [2024-07-21 01:33:04.733969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.910 ms 00:23:19.662 [2024-07-21 01:33:04.733981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.740183] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:19.662 [2024-07-21 01:33:04.743575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.662 [2024-07-21 01:33:04.743604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:19.662 [2024-07-21 01:33:04.743621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.567 ms 00:23:19.662 [2024-07-21 01:33:04.743631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.743697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.662 [2024-07-21 01:33:04.743710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:19.662 [2024-07-21 01:33:04.743726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:19.662 [2024-07-21 01:33:04.743740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.743805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.662 [2024-07-21 01:33:04.743817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:19.662 [2024-07-21 01:33:04.743839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:19.662 [2024-07-21 01:33:04.743849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.743872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.662 [2024-07-21 01:33:04.743883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:19.662 [2024-07-21 01:33:04.743894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:19.662 [2024-07-21 01:33:04.743908] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.743949] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:19.662 [2024-07-21 01:33:04.743961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.662 [2024-07-21 01:33:04.743971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:19.662 [2024-07-21 01:33:04.743982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:19.662 [2024-07-21 01:33:04.743992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.748555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.662 [2024-07-21 01:33:04.748588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.662 [2024-07-21 01:33:04.748601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:23:19.662 [2024-07-21 01:33:04.748611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.748692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.662 [2024-07-21 01:33:04.748705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:19.662 [2024-07-21 01:33:04.748739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:19.662 [2024-07-21 01:33:04.748750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.662 [2024-07-21 01:33:04.750256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.277 ms, result 0 00:24:03.326  Copying: 23/1024 [MB] (23 MBps) Copying: 45/1024 [MB] (22 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 93/1024 [MB] (23 MBps) Copying: 117/1024 [MB] (24 MBps) Copying: 140/1024 [MB] (23 MBps) Copying: 164/1024 [MB] (23 MBps) Copying: 189/1024 [MB] (25 MBps) Copying: 213/1024 [MB] (24 MBps) Copying: 238/1024 [MB] (24 MBps) Copying: 261/1024 [MB] (23 MBps) Copying: 285/1024 [MB] (23 MBps) Copying: 308/1024 [MB] (23 MBps) Copying: 331/1024 [MB] (23 MBps) Copying: 356/1024 [MB] (24 MBps) Copying: 380/1024 [MB] (24 MBps) Copying: 403/1024 [MB] (23 MBps) Copying: 427/1024 [MB] (23 MBps) Copying: 450/1024 [MB] (23 MBps) Copying: 473/1024 [MB] (23 MBps) Copying: 497/1024 [MB] (23 MBps) Copying: 519/1024 [MB] (22 MBps) Copying: 544/1024 [MB] (24 MBps) Copying: 567/1024 [MB] (23 MBps) Copying: 591/1024 [MB] (23 MBps) Copying: 614/1024 [MB] (23 MBps) Copying: 638/1024 [MB] (23 MBps) Copying: 662/1024 [MB] (24 MBps) Copying: 686/1024 [MB] (24 MBps) Copying: 711/1024 [MB] (24 MBps) Copying: 735/1024 [MB] (24 MBps) Copying: 760/1024 [MB] (24 MBps) Copying: 786/1024 [MB] (25 MBps) Copying: 813/1024 [MB] (26 MBps) Copying: 839/1024 [MB] (25 MBps) Copying: 862/1024 [MB] (23 MBps) Copying: 884/1024 [MB] (22 MBps) Copying: 907/1024 [MB] (23 MBps) Copying: 930/1024 [MB] (22 MBps) Copying: 953/1024 [MB] (23 MBps) Copying: 977/1024 [MB] (23 MBps) Copying: 999/1024 [MB] (22 MBps) Copying: 1021/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-21 01:33:48.489756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.326 [2024-07-21 01:33:48.489838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:03.326 [2024-07-21 01:33:48.489858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:03.326 [2024-07-21 
01:33:48.489876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.326 [2024-07-21 01:33:48.492137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:03.326 [2024-07-21 01:33:48.493873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.326 [2024-07-21 01:33:48.493910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:03.326 [2024-07-21 01:33:48.493924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:24:03.326 [2024-07-21 01:33:48.493935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.326 [2024-07-21 01:33:48.503038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.326 [2024-07-21 01:33:48.503080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:03.326 [2024-07-21 01:33:48.503094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.894 ms 00:24:03.326 [2024-07-21 01:33:48.503104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.326 [2024-07-21 01:33:48.526012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.326 [2024-07-21 01:33:48.526053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:03.326 [2024-07-21 01:33:48.526067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.919 ms 00:24:03.326 [2024-07-21 01:33:48.526079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.326 [2024-07-21 01:33:48.530927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.326 [2024-07-21 01:33:48.530960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:03.327 [2024-07-21 01:33:48.530972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms 00:24:03.327 [2024-07-21 01:33:48.530992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.327 [2024-07-21 01:33:48.532548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.327 [2024-07-21 01:33:48.532583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:03.327 [2024-07-21 01:33:48.532594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:24:03.327 [2024-07-21 01:33:48.532604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.327 [2024-07-21 01:33:48.537325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.327 [2024-07-21 01:33:48.537360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:03.327 [2024-07-21 01:33:48.537373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:24:03.327 [2024-07-21 01:33:48.537383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.607 [2024-07-21 01:33:48.662808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.607 [2024-07-21 01:33:48.662865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:03.607 [2024-07-21 01:33:48.662889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.597 ms 00:24:03.607 [2024-07-21 01:33:48.662899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.607 [2024-07-21 01:33:48.665253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.607 [2024-07-21 01:33:48.665288] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:03.607 [2024-07-21 01:33:48.665301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.335 ms 00:24:03.607 [2024-07-21 01:33:48.665311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.607 [2024-07-21 01:33:48.666913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.607 [2024-07-21 01:33:48.666944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:03.607 [2024-07-21 01:33:48.666956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:24:03.607 [2024-07-21 01:33:48.666978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.607 [2024-07-21 01:33:48.668170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.607 [2024-07-21 01:33:48.668200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:03.607 [2024-07-21 01:33:48.668211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:24:03.607 [2024-07-21 01:33:48.668220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.607 [2024-07-21 01:33:48.669471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.607 [2024-07-21 01:33:48.669504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:03.607 [2024-07-21 01:33:48.669515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:24:03.607 [2024-07-21 01:33:48.669524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.607 [2024-07-21 01:33:48.669550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:03.607 [2024-07-21 01:33:48.669584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102912 / 261120 wr_cnt: 1 state: open 00:24:03.607 [2024-07-21 01:33:48.669602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:03.607 [2024-07-21 01:33:48.669614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:03.607 [2024-07-21 01:33:48.669626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:03.607 [2024-07-21 01:33:48.669637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669724] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.669993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670015] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 
01:33:48.670291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:24:03.608 [2024-07-21 01:33:48.670550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:03.608 [2024-07-21 01:33:48.670659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:03.609 [2024-07-21 01:33:48.670670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:03.609 [2024-07-21 01:33:48.670688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:03.609 [2024-07-21 01:33:48.670698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 404f075b-936e-43b2-b517-a017db49f2ec 00:24:03.609 [2024-07-21 01:33:48.670709] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102912 00:24:03.609 [2024-07-21 01:33:48.670719] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103872 00:24:03.609 [2024-07-21 01:33:48.670729] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102912 00:24:03.609 [2024-07-21 01:33:48.670739] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:24:03.609 [2024-07-21 01:33:48.670749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:03.609 [2024-07-21 01:33:48.670759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:03.609 [2024-07-21 01:33:48.670768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:03.609 [2024-07-21 01:33:48.670777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:03.609 [2024-07-21 01:33:48.670786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:03.609 [2024-07-21 01:33:48.670796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.609 [2024-07-21 01:33:48.670816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:03.609 [2024-07-21 01:33:48.670830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:24:03.609 [2024-07-21 01:33:48.670850] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.673447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.609 [2024-07-21 01:33:48.673471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:03.609 [2024-07-21 01:33:48.673483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:24:03.609 [2024-07-21 01:33:48.673507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.673685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.609 [2024-07-21 01:33:48.673701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:03.609 [2024-07-21 01:33:48.673713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:24:03.609 [2024-07-21 01:33:48.673723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.682668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.682695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.609 [2024-07-21 01:33:48.682719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.682731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.682784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.682795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.609 [2024-07-21 01:33:48.682806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.682816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.682871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.682884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.609 [2024-07-21 01:33:48.682896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.682906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.682926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.682938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.609 [2024-07-21 01:33:48.682948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.682958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.703442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.703483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.609 [2024-07-21 01:33:48.703496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.703507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.717160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.609 [2024-07-21 01:33:48.717173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:03.609 [2024-07-21 01:33:48.717185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.717260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.609 [2024-07-21 01:33:48.717271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.717283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.717334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.609 [2024-07-21 01:33:48.717351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.717362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.717484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.609 [2024-07-21 01:33:48.717495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.717507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.717559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:03.609 [2024-07-21 01:33:48.717574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.717584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.717648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.609 [2024-07-21 01:33:48.717659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.717669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.609 [2024-07-21 01:33:48.717747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.609 [2024-07-21 01:33:48.717764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.609 [2024-07-21 01:33:48.717774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.609 [2024-07-21 01:33:48.717976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 230.706 ms, result 0 00:24:04.176 00:24:04.176 00:24:04.435 01:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:06.340 01:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:06.340 [2024-07-21 01:33:51.276403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:06.340 [2024-07-21 01:33:51.276566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93633 ] 00:24:06.340 [2024-07-21 01:33:51.448453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.341 [2024-07-21 01:33:51.526663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.601 [2024-07-21 01:33:51.674423] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:06.601 [2024-07-21 01:33:51.674505] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:06.601 [2024-07-21 01:33:51.830176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.830234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:06.601 [2024-07-21 01:33:51.830250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:06.601 [2024-07-21 01:33:51.830277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.830334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.830354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:06.601 [2024-07-21 01:33:51.830365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:06.601 [2024-07-21 01:33:51.830388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.830410] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:06.601 [2024-07-21 01:33:51.830618] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:06.601 [2024-07-21 01:33:51.830644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.830658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:06.601 [2024-07-21 01:33:51.830669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:24:06.601 [2024-07-21 01:33:51.830680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.833032] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:06.601 [2024-07-21 01:33:51.836499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.836544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:06.601 [2024-07-21 01:33:51.836561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.475 ms 00:24:06.601 [2024-07-21 01:33:51.836588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.836654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.836667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:06.601 [2024-07-21 01:33:51.836679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:06.601 [2024-07-21 01:33:51.836698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.848753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 
01:33:51.848786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:06.601 [2024-07-21 01:33:51.848799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.025 ms 00:24:06.601 [2024-07-21 01:33:51.848825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.848934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.848950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:06.601 [2024-07-21 01:33:51.848961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:06.601 [2024-07-21 01:33:51.848975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.849036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.849061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:06.601 [2024-07-21 01:33:51.849075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:06.601 [2024-07-21 01:33:51.849092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.849117] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:06.601 [2024-07-21 01:33:51.851740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.851767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:06.601 [2024-07-21 01:33:51.851778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:24:06.601 [2024-07-21 01:33:51.851789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.851839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.851856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:06.601 [2024-07-21 01:33:51.851870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:06.601 [2024-07-21 01:33:51.851882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.851904] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:06.601 [2024-07-21 01:33:51.851931] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:06.601 [2024-07-21 01:33:51.851979] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:06.601 [2024-07-21 01:33:51.852004] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:06.601 [2024-07-21 01:33:51.852088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:06.601 [2024-07-21 01:33:51.852111] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:06.601 [2024-07-21 01:33:51.852124] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:06.601 [2024-07-21 01:33:51.852137] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852155] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852166] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:06.601 [2024-07-21 01:33:51.852183] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:06.601 [2024-07-21 01:33:51.852193] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:06.601 [2024-07-21 01:33:51.852203] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:06.601 [2024-07-21 01:33:51.852213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.852223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:06.601 [2024-07-21 01:33:51.852238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:24:06.601 [2024-07-21 01:33:51.852251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.852320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.601 [2024-07-21 01:33:51.852330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:06.601 [2024-07-21 01:33:51.852341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:06.601 [2024-07-21 01:33:51.852351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.601 [2024-07-21 01:33:51.852427] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:06.601 [2024-07-21 01:33:51.852443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:06.601 [2024-07-21 01:33:51.852455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:06.601 [2024-07-21 01:33:51.852488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:06.601 [2024-07-21 01:33:51.852517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:06.601 [2024-07-21 01:33:51.852541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:06.601 [2024-07-21 01:33:51.852552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:06.601 [2024-07-21 01:33:51.852570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:06.601 [2024-07-21 01:33:51.852578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:06.601 [2024-07-21 01:33:51.852587] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:06.601 [2024-07-21 01:33:51.852596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:06.601 [2024-07-21 01:33:51.852614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:06.601 [2024-07-21 01:33:51.852641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:06.601 [2024-07-21 01:33:51.852668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:06.601 [2024-07-21 01:33:51.852705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.601 [2024-07-21 01:33:51.852739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:06.601 [2024-07-21 01:33:51.852749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:06.601 [2024-07-21 01:33:51.852758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.602 [2024-07-21 01:33:51.852767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:06.602 [2024-07-21 01:33:51.852776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:06.602 [2024-07-21 01:33:51.852785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:06.602 [2024-07-21 01:33:51.852795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:06.602 [2024-07-21 01:33:51.852804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:06.602 [2024-07-21 01:33:51.852814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:06.602 [2024-07-21 01:33:51.852824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:06.602 [2024-07-21 01:33:51.852833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:06.602 [2024-07-21 01:33:51.852854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.602 [2024-07-21 01:33:51.852864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:06.602 [2024-07-21 01:33:51.852875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:06.602 [2024-07-21 01:33:51.852888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.602 [2024-07-21 01:33:51.852898] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:06.602 [2024-07-21 01:33:51.852909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:06.602 [2024-07-21 01:33:51.852919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:06.602 [2024-07-21 01:33:51.852936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.602 [2024-07-21 01:33:51.852946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:06.602 [2024-07-21 01:33:51.852956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:06.602 [2024-07-21 01:33:51.852965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:06.602 [2024-07-21 01:33:51.852974] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:06.602 [2024-07-21 01:33:51.852984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:06.602 [2024-07-21 01:33:51.852993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:06.602 [2024-07-21 01:33:51.853004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:06.602 [2024-07-21 01:33:51.853015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.602 [2024-07-21 01:33:51.853034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:06.602 [2024-07-21 01:33:51.853045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:06.602 [2024-07-21 01:33:51.853055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:06.602 [2024-07-21 01:33:51.853069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:06.602 [2024-07-21 01:33:51.853080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:06.602 [2024-07-21 01:33:51.853090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:06.602 [2024-07-21 01:33:51.853101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:06.602 [2024-07-21 01:33:51.853112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:06.602 [2024-07-21 01:33:51.853123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:06.602 [2024-07-21 01:33:51.853133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:06.602 [2024-07-21 01:33:51.853143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:06.602 [2024-07-21 01:33:51.853154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:06.602 [2024-07-21 01:33:51.853165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:06.602 [2024-07-21 01:33:51.853176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:06.602 [2024-07-21 01:33:51.853186] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:06.602 [2024-07-21 01:33:51.853197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.602 [2024-07-21 01:33:51.853208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:24:06.602 [2024-07-21 01:33:51.853219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:06.602 [2024-07-21 01:33:51.853229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:06.602 [2024-07-21 01:33:51.853242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:06.602 [2024-07-21 01:33:51.853255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.602 [2024-07-21 01:33:51.853266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:06.602 [2024-07-21 01:33:51.853280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:24:06.602 [2024-07-21 01:33:51.853290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.602 [2024-07-21 01:33:51.886554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.602 [2024-07-21 01:33:51.886667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.602 [2024-07-21 01:33:51.886714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.247 ms 00:24:06.602 [2024-07-21 01:33:51.886749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.602 [2024-07-21 01:33:51.887032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.602 [2024-07-21 01:33:51.887098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:06.602 [2024-07-21 01:33:51.887133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:24:06.602 [2024-07-21 01:33:51.887166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.602 [2024-07-21 01:33:51.908351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.602 [2024-07-21 01:33:51.908398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.602 [2024-07-21 01:33:51.908416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.010 ms 00:24:06.602 [2024-07-21 01:33:51.908431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.602 [2024-07-21 01:33:51.908486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.602 [2024-07-21 01:33:51.908504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.602 [2024-07-21 01:33:51.908531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:06.602 [2024-07-21 01:33:51.908551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.602 [2024-07-21 01:33:51.909377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.602 [2024-07-21 01:33:51.909405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.602 [2024-07-21 01:33:51.909421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:24:06.602 [2024-07-21 01:33:51.909435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.602 [2024-07-21 01:33:51.909602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.602 [2024-07-21 01:33:51.909627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.602 [2024-07-21 01:33:51.909643] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:24:06.602 [2024-07-21 01:33:51.909658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.919730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.919764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:06.862 [2024-07-21 01:33:51.919778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.055 ms 00:24:06.862 [2024-07-21 01:33:51.919789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.923586] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:06.862 [2024-07-21 01:33:51.923623] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:06.862 [2024-07-21 01:33:51.923643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.923655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:06.862 [2024-07-21 01:33:51.923666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:24:06.862 [2024-07-21 01:33:51.923676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.936899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.936936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:06.862 [2024-07-21 01:33:51.936950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.191 ms 00:24:06.862 [2024-07-21 01:33:51.936976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.938880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.938912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:06.862 [2024-07-21 01:33:51.938924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:24:06.862 [2024-07-21 01:33:51.938949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.940411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.940444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:06.862 [2024-07-21 01:33:51.940455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:24:06.862 [2024-07-21 01:33:51.940465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.940774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.940800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:06.862 [2024-07-21 01:33:51.940812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:24:06.862 [2024-07-21 01:33:51.940839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.971613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.971668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:06.862 [2024-07-21 01:33:51.971685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.800 ms 00:24:06.862 
[2024-07-21 01:33:51.971713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.977932] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:06.862 [2024-07-21 01:33:51.981161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.981191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:06.862 [2024-07-21 01:33:51.981210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.419 ms 00:24:06.862 [2024-07-21 01:33:51.981228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.981308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.981321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:06.862 [2024-07-21 01:33:51.981333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:06.862 [2024-07-21 01:33:51.981343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.983535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.983580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:06.862 [2024-07-21 01:33:51.983592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:24:06.862 [2024-07-21 01:33:51.983617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.983650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.983661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:06.862 [2024-07-21 01:33:51.983672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:06.862 [2024-07-21 01:33:51.983682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.983740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:06.862 [2024-07-21 01:33:51.983754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.983764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:06.862 [2024-07-21 01:33:51.983779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:06.862 [2024-07-21 01:33:51.983789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.988571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.988618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:06.862 [2024-07-21 01:33:51.988647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:24:06.862 [2024-07-21 01:33:51.988658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.988751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.862 [2024-07-21 01:33:51.988765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:06.862 [2024-07-21 01:33:51.988777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:06.862 [2024-07-21 01:33:51.988792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.862 [2024-07-21 01:33:51.993792] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 162.783 ms, result 0 00:24:40.484  Copying: 1408/1048576 [kB] (1408 kBps) Copying: 9732/1048576 [kB] (8324 kBps) Copying: 41/1024 [MB] (32 MBps) Copying: 73/1024 [MB] (32 MBps) Copying: 105/1024 [MB] (31 MBps) Copying: 137/1024 [MB] (31 MBps) Copying: 169/1024 [MB] (31 MBps) Copying: 200/1024 [MB] (31 MBps) Copying: 232/1024 [MB] (32 MBps) Copying: 264/1024 [MB] (31 MBps) Copying: 296/1024 [MB] (31 MBps) Copying: 328/1024 [MB] (32 MBps) Copying: 359/1024 [MB] (30 MBps) Copying: 391/1024 [MB] (31 MBps) Copying: 425/1024 [MB] (34 MBps) Copying: 459/1024 [MB] (34 MBps) Copying: 492/1024 [MB] (33 MBps) Copying: 526/1024 [MB] (33 MBps) Copying: 559/1024 [MB] (33 MBps) Copying: 592/1024 [MB] (33 MBps) Copying: 626/1024 [MB] (34 MBps) Copying: 660/1024 [MB] (33 MBps) Copying: 693/1024 [MB] (33 MBps) Copying: 727/1024 [MB] (33 MBps) Copying: 759/1024 [MB] (32 MBps) Copying: 792/1024 [MB] (32 MBps) Copying: 825/1024 [MB] (32 MBps) Copying: 859/1024 [MB] (34 MBps) Copying: 893/1024 [MB] (33 MBps) Copying: 925/1024 [MB] (32 MBps) Copying: 958/1024 [MB] (32 MBps) Copying: 991/1024 [MB] (32 MBps) Copying: 1023/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 31 MBps)[2024-07-21 01:34:25.773940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.484 [2024-07-21 01:34:25.774070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:40.484 [2024-07-21 01:34:25.774104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:40.484 [2024-07-21 01:34:25.774125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.484 [2024-07-21 01:34:25.774207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.484 [2024-07-21 01:34:25.775650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.484 [2024-07-21 01:34:25.775689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:40.484 [2024-07-21 01:34:25.775712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:24:40.484 [2024-07-21 01:34:25.775750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.484 [2024-07-21 01:34:25.776281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.484 [2024-07-21 01:34:25.776307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:40.484 [2024-07-21 01:34:25.776329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:24:40.484 [2024-07-21 01:34:25.776348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.484 [2024-07-21 01:34:25.791085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.484 [2024-07-21 01:34:25.791139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:40.484 [2024-07-21 01:34:25.791165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.727 ms 00:24:40.484 [2024-07-21 01:34:25.791176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.796260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.796295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:40.745 [2024-07-21 01:34:25.796308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
5.055 ms 00:24:40.745 [2024-07-21 01:34:25.796319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.798163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.798206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:40.745 [2024-07-21 01:34:25.798219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.785 ms 00:24:40.745 [2024-07-21 01:34:25.798229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.802655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.802706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:40.745 [2024-07-21 01:34:25.802720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.400 ms 00:24:40.745 [2024-07-21 01:34:25.802731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.807345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.807386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:40.745 [2024-07-21 01:34:25.807399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.585 ms 00:24:40.745 [2024-07-21 01:34:25.807410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.809542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.809577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:40.745 [2024-07-21 01:34:25.809589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:24:40.745 [2024-07-21 01:34:25.809599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.811272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.811305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:40.745 [2024-07-21 01:34:25.811316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:24:40.745 [2024-07-21 01:34:25.811326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.812498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.812531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:40.745 [2024-07-21 01:34:25.812543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:24:40.745 [2024-07-21 01:34:25.812553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.813875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-07-21 01:34:25.814004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:40.745 [2024-07-21 01:34:25.814085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:24:40.745 [2024-07-21 01:34:25.814121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-07-21 01:34:25.814174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:40.745 [2024-07-21 01:34:25.814338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 
00:24:40.745 [2024-07-21 01:34:25.814365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:24:40.745 [2024-07-21 01:34:25.814377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 
wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.814995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:40.745 [2024-07-21 01:34:25.815085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815404] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:40.746 [2024-07-21 01:34:25.815689] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:40.746 [2024-07-21 01:34:25.815701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 404f075b-936e-43b2-b517-a017db49f2ec 00:24:40.746 [2024-07-21 01:34:25.815713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 265216 00:24:40.746 [2024-07-21 01:34:25.815723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 164288 00:24:40.746 [2024-07-21 01:34:25.815734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 162304 00:24:40.746 [2024-07-21 01:34:25.815744] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0122 00:24:40.746 [2024-07-21 01:34:25.815755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:40.746 [2024-07-21 01:34:25.815766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:40.746 [2024-07-21 01:34:25.815777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:40.746 [2024-07-21 01:34:25.815786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:40.746 [2024-07-21 01:34:25.815796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:40.746 [2024-07-21 01:34:25.815806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.746 [2024-07-21 01:34:25.815837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:40.746 [2024-07-21 01:34:25.815849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:24:40.746 [2024-07-21 01:34:25.815864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.746 [2024-07-21 01:34:25.818905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.746 [2024-07-21 01:34:25.819037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:40.746 [2024-07-21 01:34:25.819132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:24:40.746 [2024-07-21 01:34:25.819171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.746 [2024-07-21 01:34:25.819374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.746 [2024-07-21 01:34:25.819468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:40.746 [2024-07-21 01:34:25.819525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:24:40.746 [2024-07-21 01:34:25.819554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.746 [2024-07-21 01:34:25.828684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.746 [2024-07-21 01:34:25.828844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:40.746 [2024-07-21 01:34:25.828938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.746 [2024-07-21 01:34:25.828975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.746 [2024-07-21 01:34:25.829045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.746 [2024-07-21 01:34:25.829086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.746 [2024-07-21 01:34:25.829116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.746 [2024-07-21 01:34:25.829195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.829283] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.829330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.747 [2024-07-21 01:34:25.829362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.829438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.829504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.829536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.747 [2024-07-21 01:34:25.829574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.829603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.851645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.851908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:40.747 [2024-07-21 01:34:25.851986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.852023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.865535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.865718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:40.747 [2024-07-21 01:34:25.865885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.865925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.866020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.866107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:40.747 [2024-07-21 01:34:25.866144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.866174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.866284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.866366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:40.747 [2024-07-21 01:34:25.866429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.866481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.866627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.866688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:40.747 [2024-07-21 01:34:25.866745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.866775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.866882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.866925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:40.747 [2024-07-21 01:34:25.866955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.867055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.867140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.867172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:40.747 [2024-07-21 01:34:25.867202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.867233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.867305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.747 [2024-07-21 01:34:25.867402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:40.747 [2024-07-21 01:34:25.867447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.747 [2024-07-21 01:34:25.867465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.747 [2024-07-21 01:34:25.867638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 93.829 ms, result 0 00:24:41.006 00:24:41.006 00:24:41.006 01:34:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:42.909 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:42.909 01:34:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:42.909 [2024-07-21 01:34:28.075978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:42.910 [2024-07-21 01:34:28.076107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94008 ] 00:24:43.167 [2024-07-21 01:34:28.248330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.167 [2024-07-21 01:34:28.315528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.168 [2024-07-21 01:34:28.463069] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.168 [2024-07-21 01:34:28.463162] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:43.427 [2024-07-21 01:34:28.620100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.620161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.427 [2024-07-21 01:34:28.620186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:43.427 [2024-07-21 01:34:28.620205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.620266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.620279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.427 [2024-07-21 01:34:28.620290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:43.427 [2024-07-21 01:34:28.620304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.620334] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.427 [2024-07-21 
01:34:28.620688] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.427 [2024-07-21 01:34:28.620722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.620737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.427 [2024-07-21 01:34:28.620748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:24:43.427 [2024-07-21 01:34:28.620765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.623148] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:43.427 [2024-07-21 01:34:28.626626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.626662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:43.427 [2024-07-21 01:34:28.626682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:24:43.427 [2024-07-21 01:34:28.626693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.626765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.626781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:43.427 [2024-07-21 01:34:28.626800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:43.427 [2024-07-21 01:34:28.626810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.638824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.638863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.427 [2024-07-21 01:34:28.638876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.966 ms 00:24:43.427 [2024-07-21 01:34:28.638902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.639009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.639023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.427 [2024-07-21 01:34:28.639034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:43.427 [2024-07-21 01:34:28.639049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.639112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.639130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.427 [2024-07-21 01:34:28.639144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:43.427 [2024-07-21 01:34:28.639154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.639191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.427 [2024-07-21 01:34:28.641796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.641836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.427 [2024-07-21 01:34:28.641859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms 00:24:43.427 [2024-07-21 01:34:28.641870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:43.427 [2024-07-21 01:34:28.641906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.641918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.427 [2024-07-21 01:34:28.641932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:43.427 [2024-07-21 01:34:28.641946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.641972] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:43.427 [2024-07-21 01:34:28.641999] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:43.427 [2024-07-21 01:34:28.642037] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:43.427 [2024-07-21 01:34:28.642056] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:43.427 [2024-07-21 01:34:28.642164] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:43.427 [2024-07-21 01:34:28.642187] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.427 [2024-07-21 01:34:28.642201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:43.427 [2024-07-21 01:34:28.642214] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.427 [2024-07-21 01:34:28.642227] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.427 [2024-07-21 01:34:28.642239] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.427 [2024-07-21 01:34:28.642250] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.427 [2024-07-21 01:34:28.642260] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:43.427 [2024-07-21 01:34:28.642278] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:43.427 [2024-07-21 01:34:28.642289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.642299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.427 [2024-07-21 01:34:28.642314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:24:43.427 [2024-07-21 01:34:28.642327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.642398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.427 [2024-07-21 01:34:28.642409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.427 [2024-07-21 01:34:28.642419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:43.427 [2024-07-21 01:34:28.642429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.427 [2024-07-21 01:34:28.642516] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.427 [2024-07-21 01:34:28.642529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.427 [2024-07-21 01:34:28.642540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.427 [2024-07-21 01:34:28.642550] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.427 [2024-07-21 01:34:28.642564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.427 [2024-07-21 01:34:28.642573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.427 [2024-07-21 01:34:28.642583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.427 [2024-07-21 01:34:28.642592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.427 [2024-07-21 01:34:28.642602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.427 [2024-07-21 01:34:28.642612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.427 [2024-07-21 01:34:28.642625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.427 [2024-07-21 01:34:28.642637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.427 [2024-07-21 01:34:28.642657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.427 [2024-07-21 01:34:28.642666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.427 [2024-07-21 01:34:28.642676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:43.427 [2024-07-21 01:34:28.642685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.427 [2024-07-21 01:34:28.642694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.427 [2024-07-21 01:34:28.642704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:43.427 [2024-07-21 01:34:28.642714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.427 [2024-07-21 01:34:28.642724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.427 [2024-07-21 01:34:28.642733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.427 [2024-07-21 01:34:28.642743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.427 [2024-07-21 01:34:28.642752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.427 [2024-07-21 01:34:28.642762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.427 [2024-07-21 01:34:28.642771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.427 [2024-07-21 01:34:28.642780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.427 [2024-07-21 01:34:28.642798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.428 [2024-07-21 01:34:28.642808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.428 [2024-07-21 01:34:28.642817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.428 [2024-07-21 01:34:28.642839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:43.428 [2024-07-21 01:34:28.642849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.428 [2024-07-21 01:34:28.642858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.428 [2024-07-21 01:34:28.642868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:43.428 [2024-07-21 01:34:28.642877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.428 [2024-07-21 01:34:28.642887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.428 [2024-07-21 
01:34:28.642896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:43.428 [2024-07-21 01:34:28.642906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.428 [2024-07-21 01:34:28.642915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:43.428 [2024-07-21 01:34:28.642924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:43.428 [2024-07-21 01:34:28.642934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.428 [2024-07-21 01:34:28.642943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:43.428 [2024-07-21 01:34:28.642953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:43.428 [2024-07-21 01:34:28.642966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.428 [2024-07-21 01:34:28.642975] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.428 [2024-07-21 01:34:28.642986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.428 [2024-07-21 01:34:28.642995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.428 [2024-07-21 01:34:28.643005] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.428 [2024-07-21 01:34:28.643015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.428 [2024-07-21 01:34:28.643024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.428 [2024-07-21 01:34:28.643034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.428 [2024-07-21 01:34:28.643045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.428 [2024-07-21 01:34:28.643054] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.428 [2024-07-21 01:34:28.643063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.428 [2024-07-21 01:34:28.643075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.428 [2024-07-21 01:34:28.643087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.428 [2024-07-21 01:34:28.643100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.428 [2024-07-21 01:34:28.643110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:43.428 [2024-07-21 01:34:28.643121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:43.428 [2024-07-21 01:34:28.643135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:43.428 [2024-07-21 01:34:28.643146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:43.428 [2024-07-21 01:34:28.643156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:43.428 [2024-07-21 01:34:28.643167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:24:43.428 [2024-07-21 01:34:28.643177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:43.428 [2024-07-21 01:34:28.643188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:43.428 [2024-07-21 01:34:28.643199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:43.428 [2024-07-21 01:34:28.643210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:43.428 [2024-07-21 01:34:28.643220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:43.428 [2024-07-21 01:34:28.643231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:43.428 [2024-07-21 01:34:28.643242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:43.428 [2024-07-21 01:34:28.643252] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.428 [2024-07-21 01:34:28.643263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.428 [2024-07-21 01:34:28.643281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.428 [2024-07-21 01:34:28.643292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.428 [2024-07-21 01:34:28.643302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.428 [2024-07-21 01:34:28.643316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:43.428 [2024-07-21 01:34:28.643328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.643339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.428 [2024-07-21 01:34:28.643360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:24:43.428 [2024-07-21 01:34:28.643370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.675208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.675261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.428 [2024-07-21 01:34:28.675280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.823 ms 00:24:43.428 [2024-07-21 01:34:28.675294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.675403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.675418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:43.428 [2024-07-21 01:34:28.675432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:43.428 [2024-07-21 01:34:28.675445] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.691905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.691943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.428 [2024-07-21 01:34:28.691958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.403 ms 00:24:43.428 [2024-07-21 01:34:28.691977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.692023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.692035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.428 [2024-07-21 01:34:28.692046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:43.428 [2024-07-21 01:34:28.692062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.692879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.692899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.428 [2024-07-21 01:34:28.692919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:24:43.428 [2024-07-21 01:34:28.692929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.693060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.693073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.428 [2024-07-21 01:34:28.693091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:43.428 [2024-07-21 01:34:28.693101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.702862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.702894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.428 [2024-07-21 01:34:28.702908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.751 ms 00:24:43.428 [2024-07-21 01:34:28.702918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.706692] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:43.428 [2024-07-21 01:34:28.706735] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:43.428 [2024-07-21 01:34:28.706755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.706766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:43.428 [2024-07-21 01:34:28.706778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.721 ms 00:24:43.428 [2024-07-21 01:34:28.706788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.720436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.720472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:43.428 [2024-07-21 01:34:28.720487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.602 ms 00:24:43.428 [2024-07-21 01:34:28.720498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 
[2024-07-21 01:34:28.722556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.722590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:43.428 [2024-07-21 01:34:28.722603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:24:43.428 [2024-07-21 01:34:28.722613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.724170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.724202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:43.428 [2024-07-21 01:34:28.724214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:24:43.428 [2024-07-21 01:34:28.724224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.428 [2024-07-21 01:34:28.724537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.428 [2024-07-21 01:34:28.724556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:43.428 [2024-07-21 01:34:28.724567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:24:43.428 [2024-07-21 01:34:28.724582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.687 [2024-07-21 01:34:28.755472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.687 [2024-07-21 01:34:28.755543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:43.687 [2024-07-21 01:34:28.755562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.910 ms 00:24:43.687 [2024-07-21 01:34:28.755574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.687 [2024-07-21 01:34:28.761952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:43.687 [2024-07-21 01:34:28.765860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.687 [2024-07-21 01:34:28.765889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:43.687 [2024-07-21 01:34:28.765909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.256 ms 00:24:43.687 [2024-07-21 01:34:28.765920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.687 [2024-07-21 01:34:28.766008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.687 [2024-07-21 01:34:28.766022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:43.687 [2024-07-21 01:34:28.766035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:43.687 [2024-07-21 01:34:28.766045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.687 [2024-07-21 01:34:28.767381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.687 [2024-07-21 01:34:28.767421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:43.687 [2024-07-21 01:34:28.767434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:24:43.687 [2024-07-21 01:34:28.767445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.687 [2024-07-21 01:34:28.767474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.687 [2024-07-21 01:34:28.767486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:43.687 [2024-07-21 
01:34:28.767505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:43.687 [2024-07-21 01:34:28.767519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.688 [2024-07-21 01:34:28.767561] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:43.688 [2024-07-21 01:34:28.767574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.688 [2024-07-21 01:34:28.767585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:43.688 [2024-07-21 01:34:28.767599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:43.688 [2024-07-21 01:34:28.767609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.688 [2024-07-21 01:34:28.772432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.688 [2024-07-21 01:34:28.772469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:43.688 [2024-07-21 01:34:28.772482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.810 ms 00:24:43.688 [2024-07-21 01:34:28.772502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.688 [2024-07-21 01:34:28.772580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.688 [2024-07-21 01:34:28.772594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:43.688 [2024-07-21 01:34:28.772609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:43.688 [2024-07-21 01:34:28.772631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.688 [2024-07-21 01:34:28.774021] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.689 ms, result 0 00:25:25.129  Copying: 27/1024 [MB] (27 MBps) Copying: 52/1024 [MB] (25 MBps) Copying: 77/1024 [MB] (24 MBps) Copying: 101/1024 [MB] (24 MBps) Copying: 125/1024 [MB] (24 MBps) Copying: 149/1024 [MB] (23 MBps) Copying: 172/1024 [MB] (23 MBps) Copying: 196/1024 [MB] (23 MBps) Copying: 220/1024 [MB] (24 MBps) Copying: 244/1024 [MB] (24 MBps) Copying: 269/1024 [MB] (24 MBps) Copying: 294/1024 [MB] (24 MBps) Copying: 319/1024 [MB] (25 MBps) Copying: 345/1024 [MB] (25 MBps) Copying: 370/1024 [MB] (24 MBps) Copying: 394/1024 [MB] (23 MBps) Copying: 418/1024 [MB] (24 MBps) Copying: 444/1024 [MB] (26 MBps) Copying: 470/1024 [MB] (25 MBps) Copying: 496/1024 [MB] (25 MBps) Copying: 522/1024 [MB] (26 MBps) Copying: 547/1024 [MB] (25 MBps) Copying: 572/1024 [MB] (25 MBps) Copying: 598/1024 [MB] (26 MBps) Copying: 624/1024 [MB] (25 MBps) Copying: 649/1024 [MB] (25 MBps) Copying: 674/1024 [MB] (25 MBps) Copying: 699/1024 [MB] (24 MBps) Copying: 723/1024 [MB] (23 MBps) Copying: 747/1024 [MB] (24 MBps) Copying: 771/1024 [MB] (23 MBps) Copying: 795/1024 [MB] (24 MBps) Copying: 820/1024 [MB] (24 MBps) Copying: 845/1024 [MB] (25 MBps) Copying: 870/1024 [MB] (25 MBps) Copying: 894/1024 [MB] (24 MBps) Copying: 919/1024 [MB] (25 MBps) Copying: 945/1024 [MB] (25 MBps) Copying: 969/1024 [MB] (24 MBps) Copying: 994/1024 [MB] (24 MBps) Copying: 1018/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-21 01:35:10.334381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.334792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:25.129 [2024-07-21 01:35:10.334858] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:25.129 [2024-07-21 01:35:10.334896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.334946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:25.129 [2024-07-21 01:35:10.336259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.336294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:25.129 [2024-07-21 01:35:10.336316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:25:25.129 [2024-07-21 01:35:10.336336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.336727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.336751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:25.129 [2024-07-21 01:35:10.336781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:25:25.129 [2024-07-21 01:35:10.336801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.342116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.342152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:25.129 [2024-07-21 01:35:10.342168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.276 ms 00:25:25.129 [2024-07-21 01:35:10.342183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.349294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.349348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:25.129 [2024-07-21 01:35:10.349365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.097 ms 00:25:25.129 [2024-07-21 01:35:10.349385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.351046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.351085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:25.129 [2024-07-21 01:35:10.351098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:25:25.129 [2024-07-21 01:35:10.351108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.355991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.356029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:25.129 [2024-07-21 01:35:10.356041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.858 ms 00:25:25.129 [2024-07-21 01:35:10.356051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.361389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.361427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:25.129 [2024-07-21 01:35:10.361440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.313 ms 00:25:25.129 [2024-07-21 01:35:10.361459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.363795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.363840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:25.129 [2024-07-21 01:35:10.363853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.322 ms 00:25:25.129 [2024-07-21 01:35:10.363862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.365571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.365605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:25.129 [2024-07-21 01:35:10.365616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:25:25.129 [2024-07-21 01:35:10.365626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.129 [2024-07-21 01:35:10.366982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.129 [2024-07-21 01:35:10.367013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:25.129 [2024-07-21 01:35:10.367025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:25:25.129 [2024-07-21 01:35:10.367049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.130 [2024-07-21 01:35:10.368237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.130 [2024-07-21 01:35:10.368269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:25.130 [2024-07-21 01:35:10.368280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:25:25.130 [2024-07-21 01:35:10.368289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.130 [2024-07-21 01:35:10.368315] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:25.130 [2024-07-21 01:35:10.368331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:25.130 [2024-07-21 01:35:10.368345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 4096 / 261120 wr_cnt: 1 state: open 00:25:25.130 [2024-07-21 01:35:10.368356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: 
free 00:25:25.130 [2024-07-21 01:35:10.368457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 
261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.368994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:25.130 [2024-07-21 01:35:10.369223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369297] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:25.131 [2024-07-21 01:35:10.369451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:25.131 [2024-07-21 01:35:10.369461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 404f075b-936e-43b2-b517-a017db49f2ec 00:25:25.131 [2024-07-21 01:35:10.369473] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 265216 00:25:25.131 [2024-07-21 01:35:10.369482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:25.131 [2024-07-21 01:35:10.369492] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:25.131 [2024-07-21 01:35:10.369503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:25.131 [2024-07-21 01:35:10.369512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:25.131 [2024-07-21 01:35:10.369523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:25.131 [2024-07-21 01:35:10.369540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:25.131 [2024-07-21 01:35:10.369550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:25.131 [2024-07-21 01:35:10.369559] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:25.131 [2024-07-21 01:35:10.369569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.131 [2024-07-21 01:35:10.369579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:25.131 [2024-07-21 01:35:10.369604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 
00:25:25.131 [2024-07-21 01:35:10.369614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.372227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.131 [2024-07-21 01:35:10.372247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:25.131 [2024-07-21 01:35:10.372259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:25:25.131 [2024-07-21 01:35:10.372269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.372467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.131 [2024-07-21 01:35:10.372485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:25.131 [2024-07-21 01:35:10.372495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:25:25.131 [2024-07-21 01:35:10.372514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.381421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.381457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.131 [2024-07-21 01:35:10.381482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.381493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.381549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.381562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.131 [2024-07-21 01:35:10.381572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.381582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.381644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.381664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.131 [2024-07-21 01:35:10.381676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.381691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.381714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.381732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.131 [2024-07-21 01:35:10.381743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.381753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.401776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.401815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.131 [2024-07-21 01:35:10.401842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.401861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.415060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.131 [2024-07-21 01:35:10.415073] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.415083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.415154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.131 [2024-07-21 01:35:10.415164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.415174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.415231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.131 [2024-07-21 01:35:10.415241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.415250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.415345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.131 [2024-07-21 01:35:10.415355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.415366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.415429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:25.131 [2024-07-21 01:35:10.415439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.415449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.415505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.131 [2024-07-21 01:35:10.415516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.415525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.131 [2024-07-21 01:35:10.415592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.131 [2024-07-21 01:35:10.415602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.131 [2024-07-21 01:35:10.415612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.131 [2024-07-21 01:35:10.415861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.505 ms, result 0 00:25:25.698 00:25:25.698 00:25:25.698 01:35:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:27.619 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:27.619 Process with pid 92210 is not found 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92210 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 92210 ']' 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 92210 00:25:27.619 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (92210) - No such process 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 92210 is not found' 00:25:27.619 01:35:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:27.878 Remove shared memory files 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:27.878 ************************************ 00:25:27.878 END TEST ftl_dirty_shutdown 00:25:27.878 ************************************ 00:25:27.878 00:25:27.878 real 3m33.928s 00:25:27.878 user 3m59.558s 00:25:27.878 sys 0m38.881s 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:27.878 01:35:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:27.878 01:35:13 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:27.878 01:35:13 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:27.878 01:35:13 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:27.878 01:35:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:27.878 ************************************ 00:25:27.878 START TEST ftl_upgrade_shutdown 00:25:27.878 ************************************ 00:25:27.878 01:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:28.136 * Looking for test storage... 
00:25:28.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown 
-- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94532 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94532 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94532 ']' 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:28.137 01:35:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:28.394 [2024-07-21 01:35:13.462807] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
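For orientation, the target bring-up traced just above amounts to launching spdk_tgt pinned to core 0 and then waiting for its default RPC socket to answer. A minimal sketch, assuming a plain retry loop stands in for the waitforlisten helper (the real helper's retry count and poll interval may differ):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start the target on core 0, as in the trace, and remember its pid.
    "$SPDK_TGT" --cpumask='[0]' &
    spdk_tgt_pid=$!

    # Poll the default UNIX socket (/var/tmp/spdk.sock) until RPCs are served.
    for _ in $(seq 1 100); do
        "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done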
00:25:28.394 [2024-07-21 01:35:13.462941] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94532 ] 00:25:28.394 [2024-07-21 01:35:13.634054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.395 [2024-07-21 01:35:13.702044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:29.000 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 
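The get_bdev_size lookup that starts here is resolved in the next few entries (block_size 4096, num_blocks 1310720, reported size 5120). Condensed into a sketch, it is just an RPC query plus jq and shell arithmetic; this is an illustration of the traced steps, not the exact autotest_common.sh code:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Fetch the bdev descriptor as JSON and pull out its geometry.
    bdev_info=$("$RPC" bdev_get_bdevs -b basen1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720
    # Report the size in MiB: 4096 B * 1310720 blocks = 5120 MiB.
    echo $(( bs * nb / 1024 / 1024 ))              # -> 5120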
00:25:29.257 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:29.515 { 00:25:29.515 "name": "basen1", 00:25:29.515 "aliases": [ 00:25:29.515 "692272bf-0f64-461b-85ae-c52f6d84d84c" 00:25:29.515 ], 00:25:29.515 "product_name": "NVMe disk", 00:25:29.515 "block_size": 4096, 00:25:29.515 "num_blocks": 1310720, 00:25:29.515 "uuid": "692272bf-0f64-461b-85ae-c52f6d84d84c", 00:25:29.515 "assigned_rate_limits": { 00:25:29.515 "rw_ios_per_sec": 0, 00:25:29.515 "rw_mbytes_per_sec": 0, 00:25:29.515 "r_mbytes_per_sec": 0, 00:25:29.515 "w_mbytes_per_sec": 0 00:25:29.515 }, 00:25:29.515 "claimed": true, 00:25:29.515 "claim_type": "read_many_write_one", 00:25:29.515 "zoned": false, 00:25:29.515 "supported_io_types": { 00:25:29.515 "read": true, 00:25:29.515 "write": true, 00:25:29.515 "unmap": true, 00:25:29.515 "write_zeroes": true, 00:25:29.515 "flush": true, 00:25:29.515 "reset": true, 00:25:29.515 "compare": true, 00:25:29.515 "compare_and_write": false, 00:25:29.515 "abort": true, 00:25:29.515 "nvme_admin": true, 00:25:29.515 "nvme_io": true 00:25:29.515 }, 00:25:29.515 "driver_specific": { 00:25:29.515 "nvme": [ 00:25:29.515 { 00:25:29.515 "pci_address": "0000:00:11.0", 00:25:29.515 "trid": { 00:25:29.515 "trtype": "PCIe", 00:25:29.515 "traddr": "0000:00:11.0" 00:25:29.515 }, 00:25:29.515 "ctrlr_data": { 00:25:29.515 "cntlid": 0, 00:25:29.515 "vendor_id": "0x1b36", 00:25:29.515 "model_number": "QEMU NVMe Ctrl", 00:25:29.515 "serial_number": "12341", 00:25:29.515 "firmware_revision": "8.0.0", 00:25:29.515 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:29.515 "oacs": { 00:25:29.515 "security": 0, 00:25:29.515 "format": 1, 00:25:29.515 "firmware": 0, 00:25:29.515 "ns_manage": 1 00:25:29.515 }, 00:25:29.515 "multi_ctrlr": false, 00:25:29.515 "ana_reporting": false 00:25:29.515 }, 00:25:29.515 "vs": { 00:25:29.515 "nvme_version": "1.4" 00:25:29.515 }, 00:25:29.515 "ns_data": { 00:25:29.515 "id": 1, 00:25:29.515 "can_share": false 00:25:29.515 } 00:25:29.515 } 00:25:29.515 ], 00:25:29.515 "mp_policy": "active_passive" 00:25:29.515 } 00:25:29.515 } 00:25:29.515 ]' 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:29.515 01:35:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:29.773 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=efdab90f-90f7-4b36-8b6c-7965da74df1b 00:25:29.773 01:35:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:25:29.773 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u efdab90f-90f7-4b36-8b6c-7965da74df1b 00:25:30.030 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:30.287 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=5068b46d-b59a-41ef-b6de-99d24de40756 00:25:30.287 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 5068b46d-b59a-41ef-b6de-99d24de40756 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a36e7457-fcef-4343-a334-f65fb6f3e4b0 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a36e7457-fcef-4343-a334-f65fb6f3e4b0 ]] 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a36e7457-fcef-4343-a334-f65fb6f3e4b0 5120 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a36e7457-fcef-4343-a334-f65fb6f3e4b0 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a36e7457-fcef-4343-a334-f65fb6f3e4b0 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=a36e7457-fcef-4343-a334-f65fb6f3e4b0 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a36e7457-fcef-4343-a334-f65fb6f3e4b0 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:30.545 { 00:25:30.545 "name": "a36e7457-fcef-4343-a334-f65fb6f3e4b0", 00:25:30.545 "aliases": [ 00:25:30.545 "lvs/basen1p0" 00:25:30.545 ], 00:25:30.545 "product_name": "Logical Volume", 00:25:30.545 "block_size": 4096, 00:25:30.545 "num_blocks": 5242880, 00:25:30.545 "uuid": "a36e7457-fcef-4343-a334-f65fb6f3e4b0", 00:25:30.545 "assigned_rate_limits": { 00:25:30.545 "rw_ios_per_sec": 0, 00:25:30.545 "rw_mbytes_per_sec": 0, 00:25:30.545 "r_mbytes_per_sec": 0, 00:25:30.545 "w_mbytes_per_sec": 0 00:25:30.545 }, 00:25:30.545 "claimed": false, 00:25:30.545 "zoned": false, 00:25:30.545 "supported_io_types": { 00:25:30.545 "read": true, 00:25:30.545 "write": true, 00:25:30.545 "unmap": true, 00:25:30.545 "write_zeroes": true, 00:25:30.545 "flush": false, 00:25:30.545 "reset": true, 00:25:30.545 "compare": false, 00:25:30.545 "compare_and_write": false, 00:25:30.545 "abort": false, 00:25:30.545 "nvme_admin": false, 00:25:30.545 "nvme_io": false 00:25:30.545 }, 00:25:30.545 "driver_specific": { 00:25:30.545 "lvol": { 00:25:30.545 "lvol_store_uuid": "5068b46d-b59a-41ef-b6de-99d24de40756", 00:25:30.545 "base_bdev": "basen1", 00:25:30.545 "thin_provision": true, 00:25:30.545 "num_allocated_clusters": 0, 00:25:30.545 
"snapshot": false, 00:25:30.545 "clone": false, 00:25:30.545 "esnap_clone": false 00:25:30.545 } 00:25:30.545 } 00:25:30.545 } 00:25:30.545 ]' 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:25:30.545 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:30.802 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:25:30.802 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:25:30.802 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:25:30.802 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:30.802 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:30.802 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:31.060 01:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:31.060 01:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:31.060 01:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:31.060 01:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:31.060 01:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:31.060 01:35:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a36e7457-fcef-4343-a334-f65fb6f3e4b0 -c cachen1p0 --l2p_dram_limit 2 00:25:31.317 [2024-07-21 01:35:16.489015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.489079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:31.317 [2024-07-21 01:35:16.489099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:31.317 [2024-07-21 01:35:16.489111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.489186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.489200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:31.317 [2024-07-21 01:35:16.489213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:25:31.317 [2024-07-21 01:35:16.489227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.489261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:31.317 [2024-07-21 01:35:16.489632] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:31.317 [2024-07-21 01:35:16.489668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.489680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:31.317 [2024-07-21 01:35:16.489694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:25:31.317 [2024-07-21 01:35:16.489704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.489787] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID cb75ed14-a64f-403e-9d6e-3604ef5cc910 00:25:31.317 [2024-07-21 01:35:16.492088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.492117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:31.317 [2024-07-21 01:35:16.492132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:31.317 [2024-07-21 01:35:16.492145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.505344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.505392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:31.317 [2024-07-21 01:35:16.505405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.169 ms 00:25:31.317 [2024-07-21 01:35:16.505425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.505488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.505511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:31.317 [2024-07-21 01:35:16.505522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:25:31.317 [2024-07-21 01:35:16.505535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.505600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.505616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:31.317 [2024-07-21 01:35:16.505626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:25:31.317 [2024-07-21 01:35:16.505639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.505665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:31.317 [2024-07-21 01:35:16.508380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.508411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:31.317 [2024-07-21 01:35:16.508426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.725 ms 00:25:31.317 [2024-07-21 01:35:16.508435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.508469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.508480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:31.317 [2024-07-21 01:35:16.508493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:31.317 [2024-07-21 01:35:16.508502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.508527] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:31.317 [2024-07-21 01:35:16.508669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:31.317 [2024-07-21 01:35:16.508694] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:31.317 [2024-07-21 01:35:16.508718] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:31.317 [2024-07-21 01:35:16.508758] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:31.317 [2024-07-21 01:35:16.508770] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:31.317 [2024-07-21 01:35:16.508785] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:31.317 [2024-07-21 01:35:16.508795] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:31.317 [2024-07-21 01:35:16.508811] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:31.317 [2024-07-21 01:35:16.508822] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:31.317 [2024-07-21 01:35:16.508835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.508859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:31.317 [2024-07-21 01:35:16.508873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:25:31.317 [2024-07-21 01:35:16.508883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.508960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.317 [2024-07-21 01:35:16.508972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:31.317 [2024-07-21 01:35:16.509001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:25:31.317 [2024-07-21 01:35:16.509011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.317 [2024-07-21 01:35:16.509109] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:31.317 [2024-07-21 01:35:16.509124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:31.317 [2024-07-21 01:35:16.509144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:31.317 [2024-07-21 01:35:16.509155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.317 [2024-07-21 01:35:16.509169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:31.317 [2024-07-21 01:35:16.509179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:31.317 [2024-07-21 01:35:16.509191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:31.317 [2024-07-21 01:35:16.509200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:31.317 [2024-07-21 01:35:16.509212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:31.317 [2024-07-21 01:35:16.509222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.317 [2024-07-21 01:35:16.509234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:31.317 [2024-07-21 01:35:16.509243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:31.317 [2024-07-21 01:35:16.509255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.317 [2024-07-21 01:35:16.509280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:31.317 [2024-07-21 01:35:16.509296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:31.317 [2024-07-21 01:35:16.509308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.317 [2024-07-21 01:35:16.509321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:31.317 [2024-07-21 01:35:16.509330] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:31.317 [2024-07-21 01:35:16.509342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.317 [2024-07-21 01:35:16.509352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:31.317 [2024-07-21 01:35:16.509364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:31.317 [2024-07-21 01:35:16.509373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:31.317 [2024-07-21 01:35:16.509386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:31.317 [2024-07-21 01:35:16.509395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:31.317 [2024-07-21 01:35:16.509407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:31.317 [2024-07-21 01:35:16.509417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:31.317 [2024-07-21 01:35:16.509431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:31.317 [2024-07-21 01:35:16.509440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:31.317 [2024-07-21 01:35:16.509453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:31.317 [2024-07-21 01:35:16.509462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:31.317 [2024-07-21 01:35:16.509477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:31.318 [2024-07-21 01:35:16.509486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:31.318 [2024-07-21 01:35:16.509499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:31.318 [2024-07-21 01:35:16.509508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.318 [2024-07-21 01:35:16.509520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:31.318 [2024-07-21 01:35:16.509529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:31.318 [2024-07-21 01:35:16.509541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.318 [2024-07-21 01:35:16.509550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:31.318 [2024-07-21 01:35:16.509562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:31.318 [2024-07-21 01:35:16.509572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.318 [2024-07-21 01:35:16.509583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:31.318 [2024-07-21 01:35:16.509593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:31.318 [2024-07-21 01:35:16.509605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.318 [2024-07-21 01:35:16.509614] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:31.318 [2024-07-21 01:35:16.509628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:31.318 [2024-07-21 01:35:16.509638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:31.318 [2024-07-21 01:35:16.509654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:31.318 [2024-07-21 01:35:16.509669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:31.318 [2024-07-21 01:35:16.509682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:31.318 [2024-07-21 01:35:16.509692] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:31.318 [2024-07-21 01:35:16.509706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:31.318 [2024-07-21 01:35:16.509715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:31.318 [2024-07-21 01:35:16.509727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:31.318 [2024-07-21 01:35:16.509742] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:31.318 [2024-07-21 01:35:16.509765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:31.318 [2024-07-21 01:35:16.509802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:31.318 [2024-07-21 01:35:16.509837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:31.318 [2024-07-21 01:35:16.509863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:31.318 [2024-07-21 01:35:16.509876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:31.318 [2024-07-21 01:35:16.509903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.509977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:31.318 [2024-07-21 01:35:16.509987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:31.318 [2024-07-21 01:35:16.510003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.510014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.318 [2024-07-21 01:35:16.510027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:31.318 [2024-07-21 01:35:16.510037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:31.318 [2024-07-21 01:35:16.510050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:31.318 [2024-07-21 01:35:16.510061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:31.318 [2024-07-21 01:35:16.510074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:31.318 [2024-07-21 01:35:16.510084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.009 ms 00:25:31.318 [2024-07-21 01:35:16.510101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:31.318 [2024-07-21 01:35:16.510167] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:31.318 [2024-07-21 01:35:16.510196] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:35.499 [2024-07-21 01:35:20.217889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.217950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:35.499 [2024-07-21 01:35:20.217967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3713.738 ms 00:25:35.499 [2024-07-21 01:35:20.217981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.229023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.229075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:35.499 [2024-07-21 01:35:20.229091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.954 ms 00:25:35.499 [2024-07-21 01:35:20.229115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.229179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.229201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:35.499 [2024-07-21 01:35:20.229213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:35.499 [2024-07-21 01:35:20.229226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.240027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.240079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:35.499 [2024-07-21 01:35:20.240093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.772 ms 00:25:35.499 [2024-07-21 01:35:20.240106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.240143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.240156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:35.499 [2024-07-21 01:35:20.240167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:35.499 [2024-07-21 01:35:20.240180] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.240659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.240677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:35.499 [2024-07-21 01:35:20.240688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:25:35.499 [2024-07-21 01:35:20.240701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.240763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.240785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:35.499 [2024-07-21 01:35:20.240797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:35.499 [2024-07-21 01:35:20.240809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.248160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.248199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:35.499 [2024-07-21 01:35:20.248211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.342 ms 00:25:35.499 [2024-07-21 01:35:20.248232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.255791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:35.499 [2024-07-21 01:35:20.256887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.256913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:35.499 [2024-07-21 01:35:20.256929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.595 ms 00:25:35.499 [2024-07-21 01:35:20.256939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.284187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.284230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:35.499 [2024-07-21 01:35:20.284252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.258 ms 00:25:35.499 [2024-07-21 01:35:20.284264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.284364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.284378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:35.499 [2024-07-21 01:35:20.284395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:25:35.499 [2024-07-21 01:35:20.284406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.287425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.287475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:35.499 [2024-07-21 01:35:20.287498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.997 ms 00:25:35.499 [2024-07-21 01:35:20.287512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.290618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.290653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 
00:25:35.499 [2024-07-21 01:35:20.290668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.071 ms 00:25:35.499 [2024-07-21 01:35:20.290678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.290963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.290980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:35.499 [2024-07-21 01:35:20.290994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:25:35.499 [2024-07-21 01:35:20.291004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.332898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.332947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:35.499 [2024-07-21 01:35:20.332965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.932 ms 00:25:35.499 [2024-07-21 01:35:20.332979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.337485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.337528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:35.499 [2024-07-21 01:35:20.337544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.470 ms 00:25:35.499 [2024-07-21 01:35:20.337554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.340944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.340976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:35.499 [2024-07-21 01:35:20.340991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.353 ms 00:25:35.499 [2024-07-21 01:35:20.341001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.344677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.344722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:35.499 [2024-07-21 01:35:20.344738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.640 ms 00:25:35.499 [2024-07-21 01:35:20.344748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.344795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.344814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:35.499 [2024-07-21 01:35:20.344844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:35.499 [2024-07-21 01:35:20.344861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.344929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.499 [2024-07-21 01:35:20.344941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:35.499 [2024-07-21 01:35:20.344954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:25:35.499 [2024-07-21 01:35:20.344964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.499 [2024-07-21 01:35:20.345990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3862.802 ms, result 0 00:25:35.499 { 00:25:35.499 "name": 
"ftl", 00:25:35.499 "uuid": "cb75ed14-a64f-403e-9d6e-3604ef5cc910" 00:25:35.499 } 00:25:35.500 01:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:35.500 [2024-07-21 01:35:20.542193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.500 01:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:35.500 01:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:35.758 [2024-07-21 01:35:20.906121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:35.758 01:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:36.017 [2024-07-21 01:35:21.090308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:36.017 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:36.275 Fill FTL, iteration 1 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94654 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94654 /var/tmp/spdk.tgt.sock 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@827 -- # '[' -z 94654 ']' 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:36.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:36.275 01:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:36.275 [2024-07-21 01:35:21.490111] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:36.275 [2024-07-21 01:35:21.490224] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94654 ] 00:25:36.534 [2024-07-21 01:35:21.659124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.534 [2024-07-21 01:35:21.722480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.101 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:37.101 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:37.101 01:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:37.359 ftln1 00:25:37.359 01:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:37.359 01:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94654 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94654 ']' 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94654 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94654 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:37.616 killing process with pid 94654 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94654' 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94654 00:25:37.616 01:35:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94654 00:25:38.182 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:38.182 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:38.182 [2024-07-21 01:35:23.432238] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:38.182 [2024-07-21 01:35:23.432860] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94685 ] 00:25:38.440 [2024-07-21 01:35:23.606070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.440 [2024-07-21 01:35:23.672271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.900  Copying: 248/1024 [MB] (248 MBps) Copying: 500/1024 [MB] (252 MBps) Copying: 757/1024 [MB] (257 MBps) Copying: 1012/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 252 MBps) 00:25:42.900 00:25:43.159 Calculate MD5 checksum, iteration 1 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:43.159 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:43.159 [2024-07-21 01:35:28.297597] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:43.159 [2024-07-21 01:35:28.297724] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94732 ] 00:25:43.159 [2024-07-21 01:35:28.465039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.418 [2024-07-21 01:35:28.506793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.317  Copying: 625/1024 [MB] (625 MBps) Copying: 1024/1024 [MB] (average 613 MBps) 00:25:45.317 00:25:45.578 01:35:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:45.578 01:35:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:47.015 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:47.015 Fill FTL, iteration 2 00:25:47.015 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c77417096b527e8ff1e5d3b22ba5453a 00:25:47.015 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:47.015 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:47.015 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:47.015 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:47.015 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:47.016 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:47.016 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:47.016 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:47.016 01:35:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:47.274 [2024-07-21 01:35:32.398601] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:47.274 [2024-07-21 01:35:32.398933] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94782 ] 00:25:47.274 [2024-07-21 01:35:32.566409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.531 [2024-07-21 01:35:32.607392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.069  Copying: 251/1024 [MB] (251 MBps) Copying: 494/1024 [MB] (243 MBps) Copying: 739/1024 [MB] (245 MBps) Copying: 981/1024 [MB] (242 MBps) Copying: 1024/1024 [MB] (average 244 MBps) 00:25:52.069 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:52.069 Calculate MD5 checksum, iteration 2 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:52.069 01:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:52.069 [2024-07-21 01:35:37.317291] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:52.069 [2024-07-21 01:35:37.317409] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94835 ] 00:25:52.329 [2024-07-21 01:35:37.482935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.329 [2024-07-21 01:35:37.525291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.213  Copying: 645/1024 [MB] (645 MBps) Copying: 1024/1024 [MB] (average 649 MBps) 00:25:55.213 00:25:55.213 01:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:55.213 01:35:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:57.120 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:57.120 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=742161a7f9e59ac5a9d98ce2befa50a7 00:25:57.120 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:57.120 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:57.120 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:57.120 [2024-07-21 01:35:42.365006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.120 [2024-07-21 01:35:42.365067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:57.120 [2024-07-21 01:35:42.365094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:57.120 [2024-07-21 01:35:42.365105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.120 [2024-07-21 01:35:42.365133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.120 [2024-07-21 01:35:42.365158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:57.120 [2024-07-21 01:35:42.365170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:57.120 [2024-07-21 01:35:42.365193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.120 [2024-07-21 01:35:42.365216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.120 [2024-07-21 01:35:42.365228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:57.120 [2024-07-21 01:35:42.365239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:57.120 [2024-07-21 01:35:42.365250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.120 [2024-07-21 01:35:42.365327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.314 ms, result 0 00:25:57.120 true 00:25:57.120 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:57.380 { 00:25:57.380 "name": "ftl", 00:25:57.380 "properties": [ 00:25:57.380 { 00:25:57.380 "name": "superblock_version", 00:25:57.380 "value": 5, 00:25:57.380 "read-only": true 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "name": "base_device", 00:25:57.380 "bands": [ 00:25:57.380 { 00:25:57.380 "id": 0, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 1, 00:25:57.380 "state": 
"FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 2, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 3, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 4, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 5, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 6, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 7, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 8, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 9, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 10, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 11, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 12, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 13, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 14, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 15, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 16, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 17, 00:25:57.380 "state": "FREE", 00:25:57.380 "validity": 0.0 00:25:57.380 } 00:25:57.380 ], 00:25:57.380 "read-only": true 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "name": "cache_device", 00:25:57.380 "type": "bdev", 00:25:57.380 "chunks": [ 00:25:57.380 { 00:25:57.380 "id": 0, 00:25:57.380 "state": "INACTIVE", 00:25:57.380 "utilization": 0.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 1, 00:25:57.380 "state": "CLOSED", 00:25:57.380 "utilization": 1.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 2, 00:25:57.380 "state": "CLOSED", 00:25:57.380 "utilization": 1.0 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 3, 00:25:57.380 "state": "OPEN", 00:25:57.380 "utilization": 0.001953125 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "id": 4, 00:25:57.380 "state": "OPEN", 00:25:57.380 "utilization": 0.0 00:25:57.380 } 00:25:57.380 ], 00:25:57.380 "read-only": true 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "name": "verbose_mode", 00:25:57.380 "value": true, 00:25:57.380 "unit": "", 00:25:57.380 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:57.380 }, 00:25:57.380 { 00:25:57.380 "name": "prep_upgrade_on_shutdown", 00:25:57.380 "value": false, 00:25:57.380 "unit": "", 00:25:57.380 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:57.380 } 00:25:57.380 ] 00:25:57.380 } 00:25:57.380 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:57.641 [2024-07-21 01:35:42.756941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.641 [2024-07-21 01:35:42.756992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Decode property 00:25:57.641 [2024-07-21 01:35:42.757007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:57.641 [2024-07-21 01:35:42.757018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.641 [2024-07-21 01:35:42.757044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.641 [2024-07-21 01:35:42.757056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:57.641 [2024-07-21 01:35:42.757067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:57.641 [2024-07-21 01:35:42.757077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.641 [2024-07-21 01:35:42.757097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.641 [2024-07-21 01:35:42.757109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:57.641 [2024-07-21 01:35:42.757120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:57.641 [2024-07-21 01:35:42.757130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.641 [2024-07-21 01:35:42.757189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.242 ms, result 0 00:25:57.641 true 00:25:57.641 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:57.641 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:57.641 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:57.899 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:57.899 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:57.899 01:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:57.899 [2024-07-21 01:35:43.132994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.899 [2024-07-21 01:35:43.133058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:57.899 [2024-07-21 01:35:43.133076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:57.899 [2024-07-21 01:35:43.133088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.899 [2024-07-21 01:35:43.133116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.899 [2024-07-21 01:35:43.133128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:57.899 [2024-07-21 01:35:43.133139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:57.899 [2024-07-21 01:35:43.133149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.899 [2024-07-21 01:35:43.133170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.899 [2024-07-21 01:35:43.133182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:57.899 [2024-07-21 01:35:43.133194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:57.899 [2024-07-21 01:35:43.133204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.899 [2024-07-21 01:35:43.133269] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.271 ms, result 0 00:25:57.899 true 00:25:57.899 01:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:58.157 { 00:25:58.158 "name": "ftl", 00:25:58.158 "properties": [ 00:25:58.158 { 00:25:58.158 "name": "superblock_version", 00:25:58.158 "value": 5, 00:25:58.158 "read-only": true 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "name": "base_device", 00:25:58.158 "bands": [ 00:25:58.158 { 00:25:58.158 "id": 0, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 1, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 2, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 3, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 4, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 5, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 6, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 7, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 8, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 9, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 10, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 11, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 12, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 13, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 14, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 15, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 16, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 17, 00:25:58.158 "state": "FREE", 00:25:58.158 "validity": 0.0 00:25:58.158 } 00:25:58.158 ], 00:25:58.158 "read-only": true 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "name": "cache_device", 00:25:58.158 "type": "bdev", 00:25:58.158 "chunks": [ 00:25:58.158 { 00:25:58.158 "id": 0, 00:25:58.158 "state": "INACTIVE", 00:25:58.158 "utilization": 0.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 1, 00:25:58.158 "state": "CLOSED", 00:25:58.158 "utilization": 1.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 2, 00:25:58.158 "state": "CLOSED", 00:25:58.158 "utilization": 1.0 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 3, 00:25:58.158 "state": "OPEN", 00:25:58.158 "utilization": 0.001953125 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "id": 4, 00:25:58.158 "state": "OPEN", 00:25:58.158 "utilization": 0.0 00:25:58.158 } 00:25:58.158 ], 00:25:58.158 "read-only": true 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "name": "verbose_mode", 00:25:58.158 "value": true, 00:25:58.158 "unit": "", 00:25:58.158 "desc": "In 
verbose mode, user is able to get access to additional advanced FTL properties" 00:25:58.158 }, 00:25:58.158 { 00:25:58.158 "name": "prep_upgrade_on_shutdown", 00:25:58.158 "value": true, 00:25:58.158 "unit": "", 00:25:58.158 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:58.158 } 00:25:58.158 ] 00:25:58.158 } 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94532 ]] 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94532 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94532 ']' 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94532 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94532 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:58.158 killing process with pid 94532 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94532' 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94532 00:25:58.158 01:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94532 00:25:58.416 [2024-07-21 01:35:43.607051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:58.416 [2024-07-21 01:35:43.614320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.416 [2024-07-21 01:35:43.614365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:58.416 [2024-07-21 01:35:43.614397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:58.416 [2024-07-21 01:35:43.614409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:58.416 [2024-07-21 01:35:43.614434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:58.416 [2024-07-21 01:35:43.615546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:58.416 [2024-07-21 01:35:43.615579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:58.416 [2024-07-21 01:35:43.615591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.096 ms 00:25:58.416 [2024-07-21 01:35:43.615601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.774716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.774804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:06.540 [2024-07-21 01:35:50.774840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7170.703 ms 00:26:06.540 [2024-07-21 01:35:50.774851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.775881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.775927] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:06.540 [2024-07-21 01:35:50.775940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.006 ms 00:26:06.540 [2024-07-21 01:35:50.775951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.776897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.776922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:06.540 [2024-07-21 01:35:50.776935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:26:06.540 [2024-07-21 01:35:50.776946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.779067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.779107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:06.540 [2024-07-21 01:35:50.779120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.069 ms 00:26:06.540 [2024-07-21 01:35:50.779131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.781958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.782001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:06.540 [2024-07-21 01:35:50.782030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.801 ms 00:26:06.540 [2024-07-21 01:35:50.782041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.782117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.782129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:06.540 [2024-07-21 01:35:50.782141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:06.540 [2024-07-21 01:35:50.782157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.783602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.783637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:06.540 [2024-07-21 01:35:50.783665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.429 ms 00:26:06.540 [2024-07-21 01:35:50.783686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.785203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.785237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:06.540 [2024-07-21 01:35:50.785250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.492 ms 00:26:06.540 [2024-07-21 01:35:50.785260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.786605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.786637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:06.540 [2024-07-21 01:35:50.786649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.318 ms 00:26:06.540 [2024-07-21 01:35:50.786658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.787968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 
[2024-07-21 01:35:50.788013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:06.540 [2024-07-21 01:35:50.788025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.238 ms 00:26:06.540 [2024-07-21 01:35:50.788034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.788062] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:06.540 [2024-07-21 01:35:50.788078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:06.540 [2024-07-21 01:35:50.788092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:06.540 [2024-07-21 01:35:50.788104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:06.540 [2024-07-21 01:35:50.788115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:06.540 [2024-07-21 01:35:50.788313] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:06.540 [2024-07-21 01:35:50.788324] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: cb75ed14-a64f-403e-9d6e-3604ef5cc910 00:26:06.540 [2024-07-21 01:35:50.788335] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:06.540 [2024-07-21 01:35:50.788355] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:06.540 [2024-07-21 01:35:50.788365] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:06.540 [2024-07-21 01:35:50.788376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:06.540 [2024-07-21 01:35:50.788386] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:06.540 [2024-07-21 01:35:50.788396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:06.540 [2024-07-21 01:35:50.788411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:06.540 [2024-07-21 01:35:50.788420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:06.540 [2024-07-21 01:35:50.788431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:06.540 [2024-07-21 01:35:50.788442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.788453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:06.540 [2024-07-21 01:35:50.788472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.382 ms 00:26:06.540 [2024-07-21 01:35:50.788482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.791174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.791198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:06.540 [2024-07-21 01:35:50.791209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.678 ms 00:26:06.540 [2024-07-21 01:35:50.791235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.791400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.540 [2024-07-21 01:35:50.791412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:06.540 [2024-07-21 01:35:50.791423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.140 ms 00:26:06.540 [2024-07-21 01:35:50.791433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.801747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.801778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:06.540 [2024-07-21 01:35:50.801806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.801823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.801916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.801929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:06.540 [2024-07-21 01:35:50.801940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.801951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.802022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.802035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:06.540 [2024-07-21 01:35:50.802047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.802057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.802089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 
01:35:50.802101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:06.540 [2024-07-21 01:35:50.802112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.802122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.820213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.820250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:06.540 [2024-07-21 01:35:50.820263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.820274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.832914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.832948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:06.540 [2024-07-21 01:35:50.832977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.832989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.833057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.833084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:06.540 [2024-07-21 01:35:50.833096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.833108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.833157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.833174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:06.540 [2024-07-21 01:35:50.833185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.833195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.833282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.833295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:06.540 [2024-07-21 01:35:50.833306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.833317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.833358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.833370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:06.540 [2024-07-21 01:35:50.833392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.833403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.833448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.833465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:06.540 [2024-07-21 01:35:50.833476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.833486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.833538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:26:06.540 [2024-07-21 01:35:50.833562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:06.540 [2024-07-21 01:35:50.833573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.540 [2024-07-21 01:35:50.833582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.540 [2024-07-21 01:35:50.833740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7231.084 ms, result 0 00:26:07.918 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:07.918 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:07.918 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:07.918 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:07.918 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95000 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95000 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95000 ']' 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:07.919 01:35:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:08.176 [2024-07-21 01:35:53.312508] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
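For orientation before the startup trace below: the xtrace above shows tcp_target_setup relaunching spdk_tgt from the saved tgt.json config and waiting on the RPC socket. A minimal sketch of that step, inferred from the logged commands rather than taken verbatim from ftl/common.sh (SPDK_DIR is an assumed stand-in for /home/vagrant/spdk_repo/spdk):

# Approximate shape of the target restart as reflected in the xtrace above.
# SPDK_DIR is an assumed variable; the log uses the absolute repo path.
tcp_target_setup() {
    local base_bdev= cache_bdev=
    # Relaunch the SPDK target pinned to core 0; tgt.json re-creates the bdevs
    # and the FTL instance, which is why an FTL startup trace follows.
    "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$SPDK_DIR/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    # Block until the target listens on /var/tmp/spdk.sock before issuing RPCs.
    waitforlisten "$spdk_tgt_pid"
}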
00:26:08.176 [2024-07-21 01:35:53.312635] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95000 ] 00:26:08.176 [2024-07-21 01:35:53.480483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.434 [2024-07-21 01:35:53.545352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.692 [2024-07-21 01:35:53.946706] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:08.692 [2024-07-21 01:35:53.946790] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:08.951 [2024-07-21 01:35:54.084917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.951 [2024-07-21 01:35:54.084967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:08.951 [2024-07-21 01:35:54.084983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:08.951 [2024-07-21 01:35:54.084993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.951 [2024-07-21 01:35:54.085063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.951 [2024-07-21 01:35:54.085078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:08.951 [2024-07-21 01:35:54.085088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:26:08.951 [2024-07-21 01:35:54.085105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.951 [2024-07-21 01:35:54.085137] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:08.951 [2024-07-21 01:35:54.085370] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:08.951 [2024-07-21 01:35:54.085395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.951 [2024-07-21 01:35:54.085406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:08.951 [2024-07-21 01:35:54.085417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:26:08.951 [2024-07-21 01:35:54.085426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.951 [2024-07-21 01:35:54.087786] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:08.951 [2024-07-21 01:35:54.091342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.951 [2024-07-21 01:35:54.091378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:08.952 [2024-07-21 01:35:54.091394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.559 ms 00:26:08.952 [2024-07-21 01:35:54.091412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.091483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.091496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:08.952 [2024-07-21 01:35:54.091525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:08.952 [2024-07-21 01:35:54.091534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.103327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.103354] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:08.952 [2024-07-21 01:35:54.103372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.742 ms 00:26:08.952 [2024-07-21 01:35:54.103383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.103429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.103441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:08.952 [2024-07-21 01:35:54.103451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:08.952 [2024-07-21 01:35:54.103465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.103522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.103533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:08.952 [2024-07-21 01:35:54.103543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:08.952 [2024-07-21 01:35:54.103552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.103579] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:08.952 [2024-07-21 01:35:54.106205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.106230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:08.952 [2024-07-21 01:35:54.106241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.637 ms 00:26:08.952 [2024-07-21 01:35:54.106258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.106290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.106300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:08.952 [2024-07-21 01:35:54.106310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:08.952 [2024-07-21 01:35:54.106320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.106353] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:08.952 [2024-07-21 01:35:54.106379] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:08.952 [2024-07-21 01:35:54.106414] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:08.952 [2024-07-21 01:35:54.106434] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:08.952 [2024-07-21 01:35:54.106521] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:08.952 [2024-07-21 01:35:54.106534] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:08.952 [2024-07-21 01:35:54.106547] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:08.952 [2024-07-21 01:35:54.106577] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:08.952 [2024-07-21 01:35:54.106588] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 
MiB 00:26:08.952 [2024-07-21 01:35:54.106599] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:08.952 [2024-07-21 01:35:54.106609] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:08.952 [2024-07-21 01:35:54.106623] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:08.952 [2024-07-21 01:35:54.106636] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:08.952 [2024-07-21 01:35:54.106647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.106657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:08.952 [2024-07-21 01:35:54.106668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:26:08.952 [2024-07-21 01:35:54.106678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.106746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.952 [2024-07-21 01:35:54.106763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:08.952 [2024-07-21 01:35:54.106780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:08.952 [2024-07-21 01:35:54.106790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.952 [2024-07-21 01:35:54.106888] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:08.952 [2024-07-21 01:35:54.106909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:08.952 [2024-07-21 01:35:54.106920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:08.952 [2024-07-21 01:35:54.106931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.106941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:08.952 [2024-07-21 01:35:54.106953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.106964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:08.952 [2024-07-21 01:35:54.106973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:08.952 [2024-07-21 01:35:54.106983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:08.952 [2024-07-21 01:35:54.106992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:08.952 [2024-07-21 01:35:54.107011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:08.952 [2024-07-21 01:35:54.107020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:08.952 [2024-07-21 01:35:54.107040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:08.952 [2024-07-21 01:35:54.107049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:08.952 [2024-07-21 01:35:54.107067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:08.952 [2024-07-21 01:35:54.107075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107084] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:08.952 [2024-07-21 01:35:54.107094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:08.952 [2024-07-21 01:35:54.107105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.952 [2024-07-21 01:35:54.107115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:08.952 [2024-07-21 01:35:54.107124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:08.952 [2024-07-21 01:35:54.107133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.952 [2024-07-21 01:35:54.107141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:08.952 [2024-07-21 01:35:54.107150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:08.952 [2024-07-21 01:35:54.107160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.952 [2024-07-21 01:35:54.107169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:08.952 [2024-07-21 01:35:54.107178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:08.952 [2024-07-21 01:35:54.107187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:08.952 [2024-07-21 01:35:54.107196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:08.952 [2024-07-21 01:35:54.107204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:08.952 [2024-07-21 01:35:54.107213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:08.952 [2024-07-21 01:35:54.107230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:08.952 [2024-07-21 01:35:54.107239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:08.952 [2024-07-21 01:35:54.107262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:08.952 [2024-07-21 01:35:54.107288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:08.952 [2024-07-21 01:35:54.107296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107314] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:08.952 [2024-07-21 01:35:54.107324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:08.952 [2024-07-21 01:35:54.107334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:08.952 [2024-07-21 01:35:54.107345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:08.952 [2024-07-21 01:35:54.107355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:08.952 [2024-07-21 01:35:54.107364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:08.952 [2024-07-21 01:35:54.107374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:08.952 [2024-07-21 01:35:54.107383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:08.952 [2024-07-21 01:35:54.107392] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:08.952 [2024-07-21 01:35:54.107402] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:08.952 [2024-07-21 01:35:54.107419] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:08.952 [2024-07-21 01:35:54.107433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.952 [2024-07-21 01:35:54.107443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:08.952 [2024-07-21 01:35:54.107454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:08.952 [2024-07-21 01:35:54.107465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:08.952 [2024-07-21 01:35:54.107475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:08.952 [2024-07-21 01:35:54.107485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:08.952 [2024-07-21 01:35:54.107495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:08.952 [2024-07-21 01:35:54.107505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:08.953 [2024-07-21 01:35:54.107516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:08.953 [2024-07-21 01:35:54.107588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:08.953 [2024-07-21 01:35:54.107602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:08.953 [2024-07-21 01:35:54.107623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 
blk_offs:0x40 blk_sz:0x480000 00:26:08.953 [2024-07-21 01:35:54.107633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:08.953 [2024-07-21 01:35:54.107643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:08.953 [2024-07-21 01:35:54.107653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.953 [2024-07-21 01:35:54.107663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:08.953 [2024-07-21 01:35:54.107673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.817 ms 00:26:08.953 [2024-07-21 01:35:54.107686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.953 [2024-07-21 01:35:54.107742] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:08.953 [2024-07-21 01:35:54.107756] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:13.186 [2024-07-21 01:35:57.892027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.892100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:13.186 [2024-07-21 01:35:57.892131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3790.427 ms 00:26:13.186 [2024-07-21 01:35:57.892142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.910779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.910838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:13.186 [2024-07-21 01:35:57.910854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.539 ms 00:26:13.186 [2024-07-21 01:35:57.910865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.910969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.910982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:13.186 [2024-07-21 01:35:57.911007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:13.186 [2024-07-21 01:35:57.911017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.927717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.927756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:13.186 [2024-07-21 01:35:57.927771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.684 ms 00:26:13.186 [2024-07-21 01:35:57.927782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.927835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.927853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:13.186 [2024-07-21 01:35:57.927863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:13.186 [2024-07-21 01:35:57.927873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.928620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.928644] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:13.186 [2024-07-21 01:35:57.928656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.697 ms 00:26:13.186 [2024-07-21 01:35:57.928666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.928712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.928734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:13.186 [2024-07-21 01:35:57.928766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:13.186 [2024-07-21 01:35:57.928786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.940624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.940660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:13.186 [2024-07-21 01:35:57.940674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.832 ms 00:26:13.186 [2024-07-21 01:35:57.940684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.944426] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:13.186 [2024-07-21 01:35:57.944469] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:13.186 [2024-07-21 01:35:57.944486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.944497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:13.186 [2024-07-21 01:35:57.944508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.676 ms 00:26:13.186 [2024-07-21 01:35:57.944517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.947994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.948028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:13.186 [2024-07-21 01:35:57.948047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.441 ms 00:26:13.186 [2024-07-21 01:35:57.948057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.949422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.949454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:13.186 [2024-07-21 01:35:57.949466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.322 ms 00:26:13.186 [2024-07-21 01:35:57.949475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.950888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.950915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:13.186 [2024-07-21 01:35:57.950926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.378 ms 00:26:13.186 [2024-07-21 01:35:57.950937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.951226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.186 [2024-07-21 01:35:57.951243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:13.186 [2024-07-21 01:35:57.951255] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:26:13.186 [2024-07-21 01:35:57.951270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.186 [2024-07-21 01:35:57.999276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:57.999342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:13.187 [2024-07-21 01:35:57.999366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.058 ms 00:26:13.187 [2024-07-21 01:35:57.999382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.006336] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:13.187 [2024-07-21 01:35:58.007114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.007139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:13.187 [2024-07-21 01:35:58.007151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.682 ms 00:26:13.187 [2024-07-21 01:35:58.007162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.007231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.007245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:13.187 [2024-07-21 01:35:58.007257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:13.187 [2024-07-21 01:35:58.007267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.007323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.007341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:13.187 [2024-07-21 01:35:58.007353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:13.187 [2024-07-21 01:35:58.007372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.007399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.007418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:13.187 [2024-07-21 01:35:58.007430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:13.187 [2024-07-21 01:35:58.007441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.007481] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:13.187 [2024-07-21 01:35:58.007495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.007508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:13.187 [2024-07-21 01:35:58.007519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:13.187 [2024-07-21 01:35:58.007529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.011218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.011263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:13.187 [2024-07-21 01:35:58.011278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.672 ms 00:26:13.187 [2024-07-21 01:35:58.011289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.011363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.011375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:13.187 [2024-07-21 01:35:58.011394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:13.187 [2024-07-21 01:35:58.011404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.012942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3933.854 ms, result 0 00:26:13.187 [2024-07-21 01:35:58.027548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.187 [2024-07-21 01:35:58.043508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:13.187 [2024-07-21 01:35:58.051641] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:13.187 01:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:13.187 01:35:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:26:13.187 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:13.187 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:13.187 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:13.187 [2024-07-21 01:35:58.267287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.267334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:13.187 [2024-07-21 01:35:58.267349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:13.187 [2024-07-21 01:35:58.267360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.267383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.267395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:13.187 [2024-07-21 01:35:58.267406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:13.187 [2024-07-21 01:35:58.267416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.267439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.187 [2024-07-21 01:35:58.267450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:13.187 [2024-07-21 01:35:58.267469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:13.187 [2024-07-21 01:35:58.267479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.187 [2024-07-21 01:35:58.267546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.248 ms, result 0 00:26:13.187 true 00:26:13.187 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:13.187 { 00:26:13.187 "name": "ftl", 00:26:13.187 "properties": [ 00:26:13.187 { 00:26:13.187 "name": "superblock_version", 00:26:13.187 "value": 5, 00:26:13.187 "read-only": true 00:26:13.187 }, 00:26:13.187 { 
00:26:13.187 "name": "base_device", 00:26:13.187 "bands": [ 00:26:13.187 { 00:26:13.187 "id": 0, 00:26:13.187 "state": "CLOSED", 00:26:13.187 "validity": 1.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 1, 00:26:13.187 "state": "CLOSED", 00:26:13.187 "validity": 1.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 2, 00:26:13.187 "state": "CLOSED", 00:26:13.187 "validity": 0.007843137254901933 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 3, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 4, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 5, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 6, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 7, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 8, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 9, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 10, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 11, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 12, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 13, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 14, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 15, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 16, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 17, 00:26:13.187 "state": "FREE", 00:26:13.187 "validity": 0.0 00:26:13.187 } 00:26:13.187 ], 00:26:13.187 "read-only": true 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "name": "cache_device", 00:26:13.187 "type": "bdev", 00:26:13.187 "chunks": [ 00:26:13.187 { 00:26:13.187 "id": 0, 00:26:13.187 "state": "INACTIVE", 00:26:13.187 "utilization": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 1, 00:26:13.187 "state": "OPEN", 00:26:13.187 "utilization": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 2, 00:26:13.187 "state": "OPEN", 00:26:13.187 "utilization": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 3, 00:26:13.187 "state": "FREE", 00:26:13.187 "utilization": 0.0 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "id": 4, 00:26:13.187 "state": "FREE", 00:26:13.187 "utilization": 0.0 00:26:13.187 } 00:26:13.187 ], 00:26:13.187 "read-only": true 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "name": "verbose_mode", 00:26:13.187 "value": true, 00:26:13.187 "unit": "", 00:26:13.187 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:13.187 }, 00:26:13.187 { 00:26:13.187 "name": "prep_upgrade_on_shutdown", 00:26:13.187 "value": false, 00:26:13.187 "unit": "", 00:26:13.188 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:13.188 } 00:26:13.188 ] 00:26:13.188 } 00:26:13.188 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:13.188 01:35:58 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:13.188 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:13.446 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:13.446 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:13.446 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:13.446 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:13.446 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:13.704 Validate MD5 checksum, iteration 1 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:13.704 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:13.705 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:13.705 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:13.705 01:35:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:13.705 [2024-07-21 01:35:58.982289] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
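The two jq filters just above reduce the bdev_ftl_get_properties JSON to single counters: cache chunks with non-zero utilization and bands still reported as OPENED. A standalone sketch of that check, with the jq expressions copied from the trace; the surrounding shell (props/used/opened naming, SPDK_DIR) is illustrative only:

# Count non-idle cache chunks and OPENED bands from the FTL properties JSON.
# jq expressions match the trace; variable names are illustrative.
props=$("$SPDK_DIR"/scripts/rpc.py bdev_ftl_get_properties -b ftl)
used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
echo "used=$used opened=$opened"   # the run above reports 0 for both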
00:26:13.705 [2024-07-21 01:35:58.982435] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95073 ] 00:26:13.963 [2024-07-21 01:35:59.138230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.963 [2024-07-21 01:35:59.180646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.839  Copying: 624/1024 [MB] (624 MBps) Copying: 1024/1024 [MB] (average 617 MBps) 00:26:16.839 00:26:16.839 01:36:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:16.839 01:36:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:18.741 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c77417096b527e8ff1e5d3b22ba5453a 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c77417096b527e8ff1e5d3b22ba5453a != \c\7\7\4\1\7\0\9\6\b\5\2\7\e\8\f\f\1\e\5\d\3\b\2\2\b\a\5\4\5\3\a ]] 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:18.742 Validate MD5 checksum, iteration 2 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:18.742 01:36:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.742 [2024-07-21 01:36:03.814019] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
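Iteration 1 above read the first 1024 MiB of ftln1 through spdk_dd and its MD5 matched the expected value; iteration 2 repeats the read at skip=1024. A condensed sketch of that validation loop, using the dd flags from the trace; the file path variable and the expected-sum array name are illustrative, not the script's actual identifiers:

# Read the FTL namespace back in 1 GiB strides and compare each stride's MD5
# against a sum recorded earlier in the test (before this excerpt).
# expected_sums and TESTFILE are illustrative names.
iterations=2   # this run validates two 1 GiB strides
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$TESTFILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    ((skip += 1024))
    sum=$(md5sum "$TESTFILE" | cut -f1 -d' ')
    # A mismatch means data written before the shutdown/upgrade cycle was lost or corrupted.
    [[ $sum == "${expected_sums[i]}" ]] || exit 1
done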
00:26:18.742 [2024-07-21 01:36:03.814129] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95129 ] 00:26:18.742 [2024-07-21 01:36:03.986108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.742 [2024-07-21 01:36:04.028802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.420  Copying: 626/1024 [MB] (626 MBps) Copying: 1024/1024 [MB] (average 622 MBps) 00:26:22.420 00:26:22.420 01:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:22.420 01:36:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=742161a7f9e59ac5a9d98ce2befa50a7 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 742161a7f9e59ac5a9d98ce2befa50a7 != \7\4\2\1\6\1\a\7\f\9\e\5\9\a\c\5\a\9\d\9\8\c\e\2\b\e\f\a\5\0\a\7 ]] 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 95000 ]] 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 95000 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95186 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95186 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 95186 ']' 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
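The trace around this point follows the test's validate-then-dirty-shutdown pattern. A condensed sketch is below; the device name (ftln1), the dd parameters, and the spdk_tgt_* variable names come from the trace, while the loop structure, $testfile and the reference-sum array are illustrative stand-ins.

    # Read each 1 GiB slice back from the FTL device and compare its MD5.
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$(( skip + 1024 ))
        sum=$(md5sum "$testfile" | cut -f1 '-d ')
        [[ $sum == "${ref_sum[i]}" ]] || exit 1   # c774... and 7421... in this run
    done
    # Dirty shutdown: kill the target outright so no clean-shutdown state is
    # persisted, then start a fresh target from tgt.json; the startup that
    # follows below therefore has to run FTL recovery.
    kill -9 "$spdk_tgt_pid"
    $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"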
00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:24.324 01:36:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:24.324 [2024-07-21 01:36:09.378618] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:24.324 [2024-07-21 01:36:09.378735] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95186 ] 00:26:24.324 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 95000 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:24.324 [2024-07-21 01:36:09.546918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.324 [2024-07-21 01:36:09.617755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.893 [2024-07-21 01:36:10.021354] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:24.893 [2024-07-21 01:36:10.021434] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:24.893 [2024-07-21 01:36:10.160419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 [2024-07-21 01:36:10.160476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:24.893 [2024-07-21 01:36:10.160501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:24.893 [2024-07-21 01:36:10.160518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.893 [2024-07-21 01:36:10.160620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 [2024-07-21 01:36:10.160640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:24.893 [2024-07-21 01:36:10.160656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:26:24.893 [2024-07-21 01:36:10.160688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.893 [2024-07-21 01:36:10.160738] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:24.893 [2024-07-21 01:36:10.161032] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:24.893 [2024-07-21 01:36:10.161063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 [2024-07-21 01:36:10.161080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:24.893 [2024-07-21 01:36:10.161099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms 00:26:24.893 [2024-07-21 01:36:10.161115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.893 [2024-07-21 01:36:10.161577] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:24.893 [2024-07-21 01:36:10.165640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 [2024-07-21 01:36:10.165682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:24.893 [2024-07-21 01:36:10.165708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.072 ms 00:26:24.893 [2024-07-21 01:36:10.165724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.893 [2024-07-21 01:36:10.166753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 
[2024-07-21 01:36:10.166789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:24.893 [2024-07-21 01:36:10.166811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:24.893 [2024-07-21 01:36:10.166838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.893 [2024-07-21 01:36:10.167430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 [2024-07-21 01:36:10.167457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:24.893 [2024-07-21 01:36:10.167477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.505 ms 00:26:24.893 [2024-07-21 01:36:10.167492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.893 [2024-07-21 01:36:10.167554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 [2024-07-21 01:36:10.167573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:24.893 [2024-07-21 01:36:10.167590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:24.893 [2024-07-21 01:36:10.167606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.893 [2024-07-21 01:36:10.167655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.893 [2024-07-21 01:36:10.167673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:24.893 [2024-07-21 01:36:10.167689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:24.893 [2024-07-21 01:36:10.167704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.167741] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:24.894 [2024-07-21 01:36:10.168750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.168780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:24.894 [2024-07-21 01:36:10.168806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:26:24.894 [2024-07-21 01:36:10.168840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.168885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.168903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:24.894 [2024-07-21 01:36:10.168920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:24.894 [2024-07-21 01:36:10.168934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.168985] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:24.894 [2024-07-21 01:36:10.169019] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:24.894 [2024-07-21 01:36:10.169066] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:24.894 [2024-07-21 01:36:10.169098] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:24.894 [2024-07-21 01:36:10.169200] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:24.894 [2024-07-21 01:36:10.169222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:24.894 [2024-07-21 01:36:10.169243] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:24.894 [2024-07-21 01:36:10.169262] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:24.894 [2024-07-21 01:36:10.169280] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:24.894 [2024-07-21 01:36:10.169297] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:24.894 [2024-07-21 01:36:10.169316] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:24.894 [2024-07-21 01:36:10.169332] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:24.894 [2024-07-21 01:36:10.169347] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:24.894 [2024-07-21 01:36:10.169368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.169383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:24.894 [2024-07-21 01:36:10.169400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.386 ms 00:26:24.894 [2024-07-21 01:36:10.169415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.169520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.169546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:24.894 [2024-07-21 01:36:10.169564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:26:24.894 [2024-07-21 01:36:10.169580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.169686] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:24.894 [2024-07-21 01:36:10.169712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:24.894 [2024-07-21 01:36:10.169728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:24.894 [2024-07-21 01:36:10.169744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.169764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:24.894 [2024-07-21 01:36:10.169779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.169793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:24.894 [2024-07-21 01:36:10.169808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:24.894 [2024-07-21 01:36:10.169837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:24.894 [2024-07-21 01:36:10.169854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.169868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:24.894 [2024-07-21 01:36:10.169883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:24.894 [2024-07-21 01:36:10.169898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.169912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:24.894 [2024-07-21 01:36:10.169926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:24.894 [2024-07-21 01:36:10.169940] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.169955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:24.894 [2024-07-21 01:36:10.169969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:24.894 [2024-07-21 01:36:10.169985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.170000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:24.894 [2024-07-21 01:36:10.170018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:24.894 [2024-07-21 01:36:10.170033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.894 [2024-07-21 01:36:10.170048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:24.894 [2024-07-21 01:36:10.170062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:24.894 [2024-07-21 01:36:10.170076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.894 [2024-07-21 01:36:10.170090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:24.894 [2024-07-21 01:36:10.170105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:24.894 [2024-07-21 01:36:10.170119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.894 [2024-07-21 01:36:10.170133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:24.894 [2024-07-21 01:36:10.170147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:24.894 [2024-07-21 01:36:10.170162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:24.894 [2024-07-21 01:36:10.170178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:24.894 [2024-07-21 01:36:10.170195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:24.894 [2024-07-21 01:36:10.170210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.170224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:24.894 [2024-07-21 01:36:10.170238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:24.894 [2024-07-21 01:36:10.170256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.170272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:24.894 [2024-07-21 01:36:10.170287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:24.894 [2024-07-21 01:36:10.170302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.170317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:24.894 [2024-07-21 01:36:10.170346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:24.894 [2024-07-21 01:36:10.170361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 01:36:10.170375] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:24.894 [2024-07-21 01:36:10.170391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:24.894 [2024-07-21 01:36:10.170408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:24.894 [2024-07-21 01:36:10.170424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:24.894 [2024-07-21 
01:36:10.170440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:24.894 [2024-07-21 01:36:10.170455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:24.894 [2024-07-21 01:36:10.170470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:24.894 [2024-07-21 01:36:10.170486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:24.894 [2024-07-21 01:36:10.170501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:24.894 [2024-07-21 01:36:10.170523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:24.894 [2024-07-21 01:36:10.170541] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:24.894 [2024-07-21 01:36:10.170571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:24.894 [2024-07-21 01:36:10.170605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:24.894 [2024-07-21 01:36:10.170653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:24.894 [2024-07-21 01:36:10.170670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:24.894 [2024-07-21 01:36:10.170685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:24.894 [2024-07-21 01:36:10.170702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:24.894 [2024-07-21 01:36:10.170835] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:24.894 
[2024-07-21 01:36:10.170868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:24.894 [2024-07-21 01:36:10.170903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:24.894 [2024-07-21 01:36:10.170920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:24.894 [2024-07-21 01:36:10.170936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:24.894 [2024-07-21 01:36:10.170953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.170969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:24.894 [2024-07-21 01:36:10.170986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.327 ms 00:26:24.894 [2024-07-21 01:36:10.171001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.181317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.181360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:24.894 [2024-07-21 01:36:10.181383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.252 ms 00:26:24.894 [2024-07-21 01:36:10.181398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.181450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.181473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:24.894 [2024-07-21 01:36:10.181490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:24.894 [2024-07-21 01:36:10.181505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.192263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.192302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:24.894 [2024-07-21 01:36:10.192328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.700 ms 00:26:24.894 [2024-07-21 01:36:10.192344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.192396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.192424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:24.894 [2024-07-21 01:36:10.192450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:24.894 [2024-07-21 01:36:10.192466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.192611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.192641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:24.894 [2024-07-21 01:36:10.192658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:26:24.894 [2024-07-21 01:36:10.192674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:26:24.894 [2024-07-21 01:36:10.192749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.192774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:24.894 [2024-07-21 01:36:10.192790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:26:24.894 [2024-07-21 01:36:10.192805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.894 [2024-07-21 01:36:10.200160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.894 [2024-07-21 01:36:10.200204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:24.894 [2024-07-21 01:36:10.200224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.309 ms 00:26:24.894 [2024-07-21 01:36:10.200240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.895 [2024-07-21 01:36:10.200386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.895 [2024-07-21 01:36:10.200407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:24.895 [2024-07-21 01:36:10.200425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:24.895 [2024-07-21 01:36:10.200440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.214420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.153 [2024-07-21 01:36:10.214461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:25.153 [2024-07-21 01:36:10.214484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.974 ms 00:26:25.153 [2024-07-21 01:36:10.214500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.215667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.153 [2024-07-21 01:36:10.215707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:25.153 [2024-07-21 01:36:10.215730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:26:25.153 [2024-07-21 01:36:10.215763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.235735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.153 [2024-07-21 01:36:10.235799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:25.153 [2024-07-21 01:36:10.235836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.960 ms 00:26:25.153 [2024-07-21 01:36:10.235853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.236105] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:25.153 [2024-07-21 01:36:10.236266] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:25.153 [2024-07-21 01:36:10.236405] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:25.153 [2024-07-21 01:36:10.236543] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:25.153 [2024-07-21 01:36:10.236566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.153 [2024-07-21 01:36:10.236590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:25.153 [2024-07-21 01:36:10.236609] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.646 ms 00:26:25.153 [2024-07-21 01:36:10.236625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.236686] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:25.153 [2024-07-21 01:36:10.236728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.153 [2024-07-21 01:36:10.236746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:25.153 [2024-07-21 01:36:10.236763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:25.153 [2024-07-21 01:36:10.236780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.239521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.153 [2024-07-21 01:36:10.239563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:25.153 [2024-07-21 01:36:10.239584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.656 ms 00:26:25.153 [2024-07-21 01:36:10.239601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.240239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.153 [2024-07-21 01:36:10.240281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:25.153 [2024-07-21 01:36:10.240301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:25.153 [2024-07-21 01:36:10.240316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.153 [2024-07-21 01:36:10.240624] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:25.720 [2024-07-21 01:36:10.837130] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:25.720 [2024-07-21 01:36:10.837321] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:26.288 [2024-07-21 01:36:11.428309] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:26.288 [2024-07-21 01:36:11.428411] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:26.288 [2024-07-21 01:36:11.428429] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:26.288 [2024-07-21 01:36:11.428444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.288 [2024-07-21 01:36:11.428455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:26.288 [2024-07-21 01:36:11.428469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1189.975 ms 00:26:26.288 [2024-07-21 01:36:11.428480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.288 [2024-07-21 01:36:11.428514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.288 [2024-07-21 01:36:11.428526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:26.288 [2024-07-21 01:36:11.428537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:26.288 [2024-07-21 01:36:11.428554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.288 
[2024-07-21 01:36:11.435511] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:26.288 [2024-07-21 01:36:11.435642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.288 [2024-07-21 01:36:11.435656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:26.288 [2024-07-21 01:36:11.435668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.081 ms 00:26:26.288 [2024-07-21 01:36:11.435689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.288 [2024-07-21 01:36:11.436276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.288 [2024-07-21 01:36:11.436298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:26.289 [2024-07-21 01:36:11.436309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:26:26.289 [2024-07-21 01:36:11.436319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.289 [2024-07-21 01:36:11.438243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.289 [2024-07-21 01:36:11.438271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:26.289 [2024-07-21 01:36:11.438283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.908 ms 00:26:26.289 [2024-07-21 01:36:11.438293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.289 [2024-07-21 01:36:11.438363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.289 [2024-07-21 01:36:11.438377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:26.289 [2024-07-21 01:36:11.438389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:26.289 [2024-07-21 01:36:11.438398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.289 [2024-07-21 01:36:11.438526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.289 [2024-07-21 01:36:11.438540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:26.289 [2024-07-21 01:36:11.438551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:26.289 [2024-07-21 01:36:11.438561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.289 [2024-07-21 01:36:11.438583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.289 [2024-07-21 01:36:11.438594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:26.289 [2024-07-21 01:36:11.438607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:26.289 [2024-07-21 01:36:11.438624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.289 [2024-07-21 01:36:11.438657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:26.289 [2024-07-21 01:36:11.438669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.289 [2024-07-21 01:36:11.438688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:26.289 [2024-07-21 01:36:11.438699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:26.289 [2024-07-21 01:36:11.438709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.289 [2024-07-21 01:36:11.438755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.289 [2024-07-21 01:36:11.438766] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:26.289 [2024-07-21 01:36:11.438777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:26.289 [2024-07-21 01:36:11.438789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.289 [2024-07-21 01:36:11.439751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1280.996 ms, result 0 00:26:26.289 [2024-07-21 01:36:11.452232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.289 [2024-07-21 01:36:11.468193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:26.289 [2024-07-21 01:36:11.476285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:26.857 Validate MD5 checksum, iteration 1 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:26.857 01:36:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:26.857 [2024-07-21 01:36:11.954153] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:26:26.857 [2024-07-21 01:36:11.954286] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95221 ] 00:26:26.857 [2024-07-21 01:36:12.125673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.115 [2024-07-21 01:36:12.169715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.369  Copying: 677/1024 [MB] (677 MBps) Copying: 1024/1024 [MB] (average 667 MBps) 00:26:31.369 00:26:31.369 01:36:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:31.369 01:36:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:32.746 Validate MD5 checksum, iteration 2 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c77417096b527e8ff1e5d3b22ba5453a 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c77417096b527e8ff1e5d3b22ba5453a != \c\7\7\4\1\7\0\9\6\b\5\2\7\e\8\f\f\1\e\5\d\3\b\2\2\b\a\5\4\5\3\a ]] 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:32.746 01:36:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:32.746 [2024-07-21 01:36:18.008537] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:26:32.746 [2024-07-21 01:36:18.009483] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95288 ] 00:26:33.004 [2024-07-21 01:36:18.180039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.004 [2024-07-21 01:36:18.247110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.227  Copying: 661/1024 [MB] (661 MBps) Copying: 1024/1024 [MB] (average 645 MBps) 00:26:37.227 00:26:37.227 01:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:37.227 01:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:38.602 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:38.602 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=742161a7f9e59ac5a9d98ce2befa50a7 00:26:38.602 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 742161a7f9e59ac5a9d98ce2befa50a7 != \7\4\2\1\6\1\a\7\f\9\e\5\9\a\c\5\a\9\d\9\8\c\e\2\b\e\f\a\5\0\a\7 ]] 00:26:38.602 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:38.602 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:38.602 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:38.603 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:38.603 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:38.603 01:36:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95186 ]] 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95186 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 95186 ']' 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 95186 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95186 00:26:38.862 killing process with pid 95186 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95186' 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 95186 00:26:38.862 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 95186 
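Both post-recovery checksums match, so the data written before the kill -9 survived the dirty shutdown and recovery. The teardown that the trace begins here, and resumes after the FTL shutdown notices below, is in rough outline the following; paths and pid values are the ones visible in the trace.

    # Teardown sketch.
    trap - SIGINT SIGTERM EXIT
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
    killprocess 95186        # graceful stop this time: FTL persists its metadata,
                             # dumps band statistics and sets the clean state
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    rm -f /dev/shm/spdk_tgt_trace.pid95000   # trace file left behind by the killed target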
00:26:39.122 [2024-07-21 01:36:24.263770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:39.122 [2024-07-21 01:36:24.271353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.271395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:39.122 [2024-07-21 01:36:24.271412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:39.122 [2024-07-21 01:36:24.271424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.271449] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:39.122 [2024-07-21 01:36:24.272632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.272653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:39.122 [2024-07-21 01:36:24.272673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.170 ms 00:26:39.122 [2024-07-21 01:36:24.272684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.272917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.272931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:39.122 [2024-07-21 01:36:24.272943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:26:39.122 [2024-07-21 01:36:24.272958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.274143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.274177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:39.122 [2024-07-21 01:36:24.274200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.168 ms 00:26:39.122 [2024-07-21 01:36:24.274210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.275229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.275259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:39.122 [2024-07-21 01:36:24.275272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.935 ms 00:26:39.122 [2024-07-21 01:36:24.275288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.276660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.276699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:39.122 [2024-07-21 01:36:24.276712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.333 ms 00:26:39.122 [2024-07-21 01:36:24.276732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.278425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.278463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:39.122 [2024-07-21 01:36:24.278476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.663 ms 00:26:39.122 [2024-07-21 01:36:24.278492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.278564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 
[2024-07-21 01:36:24.278576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:39.122 [2024-07-21 01:36:24.278589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:39.122 [2024-07-21 01:36:24.278599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.280061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.280096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:39.122 [2024-07-21 01:36:24.280108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.444 ms 00:26:39.122 [2024-07-21 01:36:24.280118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.281646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.281681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:39.122 [2024-07-21 01:36:24.281692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.499 ms 00:26:39.122 [2024-07-21 01:36:24.281703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.283014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.283049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:39.122 [2024-07-21 01:36:24.283061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.284 ms 00:26:39.122 [2024-07-21 01:36:24.283071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.284276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.284310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:39.122 [2024-07-21 01:36:24.284322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:26:39.122 [2024-07-21 01:36:24.284332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.284362] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:39.122 [2024-07-21 01:36:24.284392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:39.122 [2024-07-21 01:36:24.284410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:39.122 [2024-07-21 01:36:24.284422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:39.122 [2024-07-21 01:36:24.284433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 
01:36:24.284500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:39.122 [2024-07-21 01:36:24.284602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:39.122 [2024-07-21 01:36:24.284612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: cb75ed14-a64f-403e-9d6e-3604ef5cc910 00:26:39.122 [2024-07-21 01:36:24.284624] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:39.122 [2024-07-21 01:36:24.284643] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:39.122 [2024-07-21 01:36:24.284654] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:39.122 [2024-07-21 01:36:24.284665] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:39.122 [2024-07-21 01:36:24.284674] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:39.122 [2024-07-21 01:36:24.284685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:39.122 [2024-07-21 01:36:24.284695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:39.122 [2024-07-21 01:36:24.284704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:39.122 [2024-07-21 01:36:24.284713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:39.122 [2024-07-21 01:36:24.284733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.284745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:39.122 [2024-07-21 01:36:24.284766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:26:39.122 [2024-07-21 01:36:24.284783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.287328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.287352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:39.122 [2024-07-21 01:36:24.287363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.529 ms 00:26:39.122 [2024-07-21 01:36:24.287373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.287544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:39.122 [2024-07-21 01:36:24.287555] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:39.122 [2024-07-21 01:36:24.287566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.150 ms 00:26:39.122 [2024-07-21 01:36:24.287577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.298391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.122 [2024-07-21 01:36:24.298427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:39.122 [2024-07-21 01:36:24.298456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.122 [2024-07-21 01:36:24.298467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.298498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.122 [2024-07-21 01:36:24.298509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:39.122 [2024-07-21 01:36:24.298520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.122 [2024-07-21 01:36:24.298530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.122 [2024-07-21 01:36:24.298610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.122 [2024-07-21 01:36:24.298623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:39.123 [2024-07-21 01:36:24.298634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.298652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.298671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.298682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:39.123 [2024-07-21 01:36:24.298692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.298702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.318733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.318768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:39.123 [2024-07-21 01:36:24.318782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.318793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.331478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.331520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:39.123 [2024-07-21 01:36:24.331534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.331544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.331618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.331630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:39.123 [2024-07-21 01:36:24.331641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.331651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.331708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Rollback 00:26:39.123 [2024-07-21 01:36:24.331719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:39.123 [2024-07-21 01:36:24.331729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.331738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.331847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.331882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:39.123 [2024-07-21 01:36:24.331900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.331910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.331952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.331965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:39.123 [2024-07-21 01:36:24.331976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.331986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.332035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.332046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:39.123 [2024-07-21 01:36:24.332061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.332071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.332123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:39.123 [2024-07-21 01:36:24.332135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:39.123 [2024-07-21 01:36:24.332145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:39.123 [2024-07-21 01:36:24.332156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:39.123 [2024-07-21 01:36:24.332304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 61.007 ms, result 0 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:39.382 Remove shared memory files 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid95000 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f 
rm -f /dev/shm/iscsi 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:39.382 00:26:39.382 real 1m11.504s 00:26:39.382 user 1m30.711s 00:26:39.382 sys 0m24.027s 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:39.382 01:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:39.382 ************************************ 00:26:39.382 END TEST ftl_upgrade_shutdown 00:26:39.382 ************************************ 00:26:39.642 01:36:24 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:39.642 01:36:24 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:39.642 01:36:24 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:26:39.642 01:36:24 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:39.642 01:36:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:39.642 ************************************ 00:26:39.642 START TEST ftl_restore_fast 00:26:39.642 ************************************ 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:39.642 * Looking for test storage... 00:26:39.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
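The run_test call above launches restore.sh with -f, -c 0000:00:10.0 and the positional base device 0000:00:11.0. Pieced together from the getopts xtrace that follows (a sketch reconstructed from the trace, not copied from the script source; the :u: entry in the optstring is not exercised in this run, so its meaning is not shown), the option handling amounts to roughly:

# Sketch of restore.sh option handling as it appears in the xtrace below.
fast_shutdown=0
nv_cache=''
while getopts ':u:c:f' opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;      # NV cache controller, here 0000:00:10.0
    f) fast_shutdown=1 ;;       # later adds --fast-shutdown to bdev_ftl_create
  esac
done
shift $((OPTIND - 1))           # the trace shows this as "shift 3" for -f -c <bdf>
device=$1                       # base controller, here 0000:00:11.0
timeout=240
mount_dir=$(mktemp -d)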
00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:39.642 01:36:24 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.g539HvZN0s 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:39.643 01:36:24 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=95434 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 95434 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 95434 ']' 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:39.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:39.643 01:36:24 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:39.902 [2024-07-21 01:36:25.034493] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:39.902 [2024-07-21 01:36:25.034647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95434 ] 00:26:39.902 [2024-07-21 01:36:25.204323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.161 [2024-07-21 01:36:25.244810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:40.730 01:36:25 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local 
nb 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:40.989 { 00:26:40.989 "name": "nvme0n1", 00:26:40.989 "aliases": [ 00:26:40.989 "b94cfa06-3e8b-4454-aa49-ede286845814" 00:26:40.989 ], 00:26:40.989 "product_name": "NVMe disk", 00:26:40.989 "block_size": 4096, 00:26:40.989 "num_blocks": 1310720, 00:26:40.989 "uuid": "b94cfa06-3e8b-4454-aa49-ede286845814", 00:26:40.989 "assigned_rate_limits": { 00:26:40.989 "rw_ios_per_sec": 0, 00:26:40.989 "rw_mbytes_per_sec": 0, 00:26:40.989 "r_mbytes_per_sec": 0, 00:26:40.989 "w_mbytes_per_sec": 0 00:26:40.989 }, 00:26:40.989 "claimed": true, 00:26:40.989 "claim_type": "read_many_write_one", 00:26:40.989 "zoned": false, 00:26:40.989 "supported_io_types": { 00:26:40.989 "read": true, 00:26:40.989 "write": true, 00:26:40.989 "unmap": true, 00:26:40.989 "write_zeroes": true, 00:26:40.989 "flush": true, 00:26:40.989 "reset": true, 00:26:40.989 "compare": true, 00:26:40.989 "compare_and_write": false, 00:26:40.989 "abort": true, 00:26:40.989 "nvme_admin": true, 00:26:40.989 "nvme_io": true 00:26:40.989 }, 00:26:40.989 "driver_specific": { 00:26:40.989 "nvme": [ 00:26:40.989 { 00:26:40.989 "pci_address": "0000:00:11.0", 00:26:40.989 "trid": { 00:26:40.989 "trtype": "PCIe", 00:26:40.989 "traddr": "0000:00:11.0" 00:26:40.989 }, 00:26:40.989 "ctrlr_data": { 00:26:40.989 "cntlid": 0, 00:26:40.989 "vendor_id": "0x1b36", 00:26:40.989 "model_number": "QEMU NVMe Ctrl", 00:26:40.989 "serial_number": "12341", 00:26:40.989 "firmware_revision": "8.0.0", 00:26:40.989 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:40.989 "oacs": { 00:26:40.989 "security": 0, 00:26:40.989 "format": 1, 00:26:40.989 "firmware": 0, 00:26:40.989 "ns_manage": 1 00:26:40.989 }, 00:26:40.989 "multi_ctrlr": false, 00:26:40.989 "ana_reporting": false 00:26:40.989 }, 00:26:40.989 "vs": { 00:26:40.989 "nvme_version": "1.4" 00:26:40.989 }, 00:26:40.989 "ns_data": { 00:26:40.989 "id": 1, 00:26:40.989 "can_share": false 00:26:40.989 } 00:26:40.989 } 00:26:40.989 ], 00:26:40.989 "mp_policy": "active_passive" 00:26:40.989 } 00:26:40.989 } 00:26:40.989 ]' 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:40.989 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=5068b46d-b59a-41ef-b6de-99d24de40756 00:26:41.248 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:41.248 01:36:26 
ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5068b46d-b59a-41ef-b6de-99d24de40756 00:26:41.508 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:41.767 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=97845aa3-b171-4bfd-bc30-abec04ad50b2 00:26:41.767 01:36:26 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 97845aa3-b171-4bfd-bc30-abec04ad50b2 00:26:41.767 01:36:27 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=b32721fc-13a1-4000-9450-3392f3b2680f 00:26:41.767 01:36:27 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:42.027 { 00:26:42.027 "name": "b32721fc-13a1-4000-9450-3392f3b2680f", 00:26:42.027 "aliases": [ 00:26:42.027 "lvs/nvme0n1p0" 00:26:42.027 ], 00:26:42.027 "product_name": "Logical Volume", 00:26:42.027 "block_size": 4096, 00:26:42.027 "num_blocks": 26476544, 00:26:42.027 "uuid": "b32721fc-13a1-4000-9450-3392f3b2680f", 00:26:42.027 "assigned_rate_limits": { 00:26:42.027 "rw_ios_per_sec": 0, 00:26:42.027 "rw_mbytes_per_sec": 0, 00:26:42.027 "r_mbytes_per_sec": 0, 00:26:42.027 "w_mbytes_per_sec": 0 00:26:42.027 }, 00:26:42.027 "claimed": false, 00:26:42.027 "zoned": false, 00:26:42.027 "supported_io_types": { 00:26:42.027 "read": true, 00:26:42.027 "write": true, 00:26:42.027 "unmap": true, 00:26:42.027 "write_zeroes": true, 00:26:42.027 "flush": false, 00:26:42.027 "reset": true, 00:26:42.027 "compare": false, 00:26:42.027 "compare_and_write": false, 00:26:42.027 "abort": false, 00:26:42.027 "nvme_admin": false, 00:26:42.027 "nvme_io": false 00:26:42.027 }, 00:26:42.027 "driver_specific": { 00:26:42.027 "lvol": { 00:26:42.027 "lvol_store_uuid": "97845aa3-b171-4bfd-bc30-abec04ad50b2", 00:26:42.027 "base_bdev": "nvme0n1", 00:26:42.027 "thin_provision": true, 00:26:42.027 "num_allocated_clusters": 0, 00:26:42.027 "snapshot": false, 00:26:42.027 "clone": false, 00:26:42.027 "esnap_clone": false 00:26:42.027 } 00:26:42.027 } 00:26:42.027 } 00:26:42.027 ]' 00:26:42.027 01:36:27 
ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:42.027 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:26:42.286 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.545 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:42.545 { 00:26:42.545 "name": "b32721fc-13a1-4000-9450-3392f3b2680f", 00:26:42.545 "aliases": [ 00:26:42.545 "lvs/nvme0n1p0" 00:26:42.545 ], 00:26:42.545 "product_name": "Logical Volume", 00:26:42.545 "block_size": 4096, 00:26:42.545 "num_blocks": 26476544, 00:26:42.545 "uuid": "b32721fc-13a1-4000-9450-3392f3b2680f", 00:26:42.545 "assigned_rate_limits": { 00:26:42.545 "rw_ios_per_sec": 0, 00:26:42.545 "rw_mbytes_per_sec": 0, 00:26:42.545 "r_mbytes_per_sec": 0, 00:26:42.545 "w_mbytes_per_sec": 0 00:26:42.545 }, 00:26:42.545 "claimed": false, 00:26:42.545 "zoned": false, 00:26:42.545 "supported_io_types": { 00:26:42.545 "read": true, 00:26:42.545 "write": true, 00:26:42.545 "unmap": true, 00:26:42.545 "write_zeroes": true, 00:26:42.545 "flush": false, 00:26:42.546 "reset": true, 00:26:42.546 "compare": false, 00:26:42.546 "compare_and_write": false, 00:26:42.546 "abort": false, 00:26:42.546 "nvme_admin": false, 00:26:42.546 "nvme_io": false 00:26:42.546 }, 00:26:42.546 "driver_specific": { 00:26:42.546 "lvol": { 00:26:42.546 "lvol_store_uuid": "97845aa3-b171-4bfd-bc30-abec04ad50b2", 00:26:42.546 "base_bdev": "nvme0n1", 00:26:42.546 "thin_provision": true, 00:26:42.546 "num_allocated_clusters": 0, 00:26:42.546 "snapshot": false, 00:26:42.546 "clone": false, 00:26:42.546 "esnap_clone": false 00:26:42.546 } 00:26:42.546 } 00:26:42.546 } 00:26:42.546 ]' 00:26:42.546 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:42.546 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:42.546 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 
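The bs/nb values pulled out with jq feed a size calculation; judging by the numbers in this trace (4096 * 1310720 giving 5120 for nvme0n1 earlier, and 4096 * 26476544 giving 103424 for the lvol), get_bdev_size reports the size in MiB. A minimal sketch of that arithmetic, assuming exactly that formula:

# Assumption: size in MiB = block_size * num_blocks / 1024^2, matching the
# bdev_size values printed in this trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$($rpc bdev_get_bdevs -b nvme0n1)
bs=$(jq '.[] .block_size' <<< "$info")    # 4096
nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720 for the QEMU namespace
echo $(( bs * nb / 1024 / 1024 ))         # 5120 MiB; the lvol gives 103424 MiB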
00:26:42.805 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:26:42.805 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:26:42.805 01:36:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:26:42.805 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:42.805 01:36:27 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:42.805 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:42.805 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.805 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=b32721fc-13a1-4000-9450-3392f3b2680f 00:26:42.805 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:26:42.805 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:26:42.805 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:26:42.805 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b32721fc-13a1-4000-9450-3392f3b2680f 00:26:43.064 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:26:43.064 { 00:26:43.064 "name": "b32721fc-13a1-4000-9450-3392f3b2680f", 00:26:43.064 "aliases": [ 00:26:43.064 "lvs/nvme0n1p0" 00:26:43.064 ], 00:26:43.064 "product_name": "Logical Volume", 00:26:43.064 "block_size": 4096, 00:26:43.064 "num_blocks": 26476544, 00:26:43.064 "uuid": "b32721fc-13a1-4000-9450-3392f3b2680f", 00:26:43.064 "assigned_rate_limits": { 00:26:43.064 "rw_ios_per_sec": 0, 00:26:43.064 "rw_mbytes_per_sec": 0, 00:26:43.064 "r_mbytes_per_sec": 0, 00:26:43.064 "w_mbytes_per_sec": 0 00:26:43.064 }, 00:26:43.064 "claimed": false, 00:26:43.064 "zoned": false, 00:26:43.064 "supported_io_types": { 00:26:43.064 "read": true, 00:26:43.064 "write": true, 00:26:43.064 "unmap": true, 00:26:43.064 "write_zeroes": true, 00:26:43.064 "flush": false, 00:26:43.064 "reset": true, 00:26:43.064 "compare": false, 00:26:43.064 "compare_and_write": false, 00:26:43.064 "abort": false, 00:26:43.064 "nvme_admin": false, 00:26:43.064 "nvme_io": false 00:26:43.064 }, 00:26:43.064 "driver_specific": { 00:26:43.064 "lvol": { 00:26:43.064 "lvol_store_uuid": "97845aa3-b171-4bfd-bc30-abec04ad50b2", 00:26:43.064 "base_bdev": "nvme0n1", 00:26:43.064 "thin_provision": true, 00:26:43.064 "num_allocated_clusters": 0, 00:26:43.064 "snapshot": false, 00:26:43.065 "clone": false, 00:26:43.065 "esnap_clone": false 00:26:43.065 } 00:26:43.065 } 00:26:43.065 } 00:26:43.065 ]' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # 
ftl_construct_args='bdev_ftl_create -b ftl0 -d b32721fc-13a1-4000-9450-3392f3b2680f --l2p_dram_limit 10' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:43.065 01:36:28 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b32721fc-13a1-4000-9450-3392f3b2680f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:43.325 [2024-07-21 01:36:28.461020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.461077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:43.325 [2024-07-21 01:36:28.461097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:43.325 [2024-07-21 01:36:28.461109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.461179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.461191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:43.325 [2024-07-21 01:36:28.461203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:43.325 [2024-07-21 01:36:28.461217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.461244] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:43.325 [2024-07-21 01:36:28.461696] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:43.325 [2024-07-21 01:36:28.461727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.461738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:43.325 [2024-07-21 01:36:28.461752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:26:43.325 [2024-07-21 01:36:28.461762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.461914] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 22795033-ca25-42d0-8831-7d3fb7d5cdbf 00:26:43.325 [2024-07-21 01:36:28.464262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.464296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:43.325 [2024-07-21 01:36:28.464312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:43.325 [2024-07-21 01:36:28.464328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.477466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.477497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:43.325 [2024-07-21 01:36:28.477510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.117 ms 00:26:43.325 [2024-07-21 01:36:28.477524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 
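The ftl_construct_args string assembled above is passed to rpc.py with a 240 s timeout; the full command, copied from the xtrace, is shown below. The flag roles are a reading of this trace rather than a definitive description: -b names the new FTL bdev, -d is the base bdev (the thin lvol created above), -c is the NV cache bdev (the split of nvc0n1), --fast-shutdown corresponds to the -f flag given to restore.sh, and the later "l2p maximum resident size is: 9 (of 10) MiB" notice suggests --l2p_dram_limit is expressed in MiB.

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d b32721fc-13a1-4000-9450-3392f3b2680f --l2p_dram_limit 10 \
    -c nvc0n1p0 --fast-shutdown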
[2024-07-21 01:36:28.477609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.477630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:43.325 [2024-07-21 01:36:28.477641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:43.325 [2024-07-21 01:36:28.477655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.477717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.477732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:43.325 [2024-07-21 01:36:28.477742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:43.325 [2024-07-21 01:36:28.477755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.477780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:43.325 [2024-07-21 01:36:28.480498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.480529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:43.325 [2024-07-21 01:36:28.480543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:26:43.325 [2024-07-21 01:36:28.480552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.480591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.325 [2024-07-21 01:36:28.480601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:43.325 [2024-07-21 01:36:28.480614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:43.325 [2024-07-21 01:36:28.480623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.325 [2024-07-21 01:36:28.480647] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:43.325 [2024-07-21 01:36:28.480806] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:43.325 [2024-07-21 01:36:28.480825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:43.325 [2024-07-21 01:36:28.480838] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:43.325 [2024-07-21 01:36:28.480867] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:43.325 [2024-07-21 01:36:28.480881] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:43.325 [2024-07-21 01:36:28.480895] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:43.325 [2024-07-21 01:36:28.480905] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:43.325 [2024-07-21 01:36:28.480922] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:43.325 [2024-07-21 01:36:28.480932] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:43.325 [2024-07-21 01:36:28.480946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.326 [2024-07-21 01:36:28.480956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:43.326 [2024-07-21 
01:36:28.480983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:26:43.326 [2024-07-21 01:36:28.480993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.326 [2024-07-21 01:36:28.481065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.326 [2024-07-21 01:36:28.481076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:43.326 [2024-07-21 01:36:28.481093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:43.326 [2024-07-21 01:36:28.481103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.326 [2024-07-21 01:36:28.481198] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:43.326 [2024-07-21 01:36:28.481212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:43.326 [2024-07-21 01:36:28.481225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:43.326 [2024-07-21 01:36:28.481269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:43.326 [2024-07-21 01:36:28.481304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:43.326 [2024-07-21 01:36:28.481327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:43.326 [2024-07-21 01:36:28.481336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:43.326 [2024-07-21 01:36:28.481348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:43.326 [2024-07-21 01:36:28.481359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:43.326 [2024-07-21 01:36:28.481375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:43.326 [2024-07-21 01:36:28.481385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:43.326 [2024-07-21 01:36:28.481406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:43.326 [2024-07-21 01:36:28.481440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:43.326 [2024-07-21 01:36:28.481469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
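A quick cross-check of the layout dump, assuming the blk_sz values printed in hex further down in the superblock metadata section are counts of 4096-byte blocks: they line up with the MiB figures printed here.

# Assumption: one block = 4096 bytes, as reported for the base bdev above.
echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # 80  -> matches "Region l2p ... 80.00 MiB"
echo $(( 0x800  * 4096 / 1024 / 1024 ))   # 8   -> matches each "Region p2lN ... 8.00 MiB"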
00:26:43.326 [2024-07-21 01:36:28.481502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:43.326 [2024-07-21 01:36:28.481532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:43.326 [2024-07-21 01:36:28.481569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:43.326 [2024-07-21 01:36:28.481589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:43.326 [2024-07-21 01:36:28.481598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:43.326 [2024-07-21 01:36:28.481610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:43.326 [2024-07-21 01:36:28.481620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:43.326 [2024-07-21 01:36:28.481634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:43.326 [2024-07-21 01:36:28.481642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:43.326 [2024-07-21 01:36:28.481662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:43.326 [2024-07-21 01:36:28.481674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481682] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:43.326 [2024-07-21 01:36:28.481695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:43.326 [2024-07-21 01:36:28.481706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:43.326 [2024-07-21 01:36:28.481738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:43.326 [2024-07-21 01:36:28.481750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:43.326 [2024-07-21 01:36:28.481759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:43.326 [2024-07-21 01:36:28.481772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:43.326 [2024-07-21 01:36:28.481781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:43.326 [2024-07-21 01:36:28.481794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:43.326 [2024-07-21 01:36:28.481807] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:43.326 [2024-07-21 01:36:28.481834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:43.326 [2024-07-21 01:36:28.481850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:43.326 
[2024-07-21 01:36:28.481863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:43.326 [2024-07-21 01:36:28.481873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:43.326 [2024-07-21 01:36:28.481887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:43.326 [2024-07-21 01:36:28.481897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:43.326 [2024-07-21 01:36:28.481911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:43.326 [2024-07-21 01:36:28.481921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:43.326 [2024-07-21 01:36:28.481949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:43.326 [2024-07-21 01:36:28.481959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:43.326 [2024-07-21 01:36:28.481972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:43.326 [2024-07-21 01:36:28.481982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:43.326 [2024-07-21 01:36:28.481994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:43.326 [2024-07-21 01:36:28.482004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:43.326 [2024-07-21 01:36:28.482016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:43.326 [2024-07-21 01:36:28.482026] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:43.326 [2024-07-21 01:36:28.482039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:43.326 [2024-07-21 01:36:28.482050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:43.326 [2024-07-21 01:36:28.482062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:43.326 [2024-07-21 01:36:28.482072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:43.326 [2024-07-21 01:36:28.482084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:43.326 [2024-07-21 01:36:28.482094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.326 [2024-07-21 01:36:28.482107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:43.326 [2024-07-21 
01:36:28.482123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:26:43.326 [2024-07-21 01:36:28.482140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.326 [2024-07-21 01:36:28.482195] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:43.326 [2024-07-21 01:36:28.482211] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:47.519 [2024-07-21 01:36:32.406719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.406808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:47.519 [2024-07-21 01:36:32.406847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3930.890 ms 00:26:47.519 [2024-07-21 01:36:32.406861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.426138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.426197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.519 [2024-07-21 01:36:32.426215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.184 ms 00:26:47.519 [2024-07-21 01:36:32.426230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.426344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.426369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:47.519 [2024-07-21 01:36:32.426381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:47.519 [2024-07-21 01:36:32.426395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.442893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.442940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.519 [2024-07-21 01:36:32.442955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.479 ms 00:26:47.519 [2024-07-21 01:36:32.442969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.443016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.443032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.519 [2024-07-21 01:36:32.443043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:47.519 [2024-07-21 01:36:32.443056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.443827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.443863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.519 [2024-07-21 01:36:32.443875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:26:47.519 [2024-07-21 01:36:32.443889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.443991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.444014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.519 [2024-07-21 01:36:32.444026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.080 ms 00:26:47.519 [2024-07-21 01:36:32.444039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.455718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.455758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.519 [2024-07-21 01:36:32.455771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.676 ms 00:26:47.519 [2024-07-21 01:36:32.455784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.464870] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:47.519 [2024-07-21 01:36:32.469842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.469872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:47.519 [2024-07-21 01:36:32.469888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.989 ms 00:26:47.519 [2024-07-21 01:36:32.469911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.566699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.566763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:47.519 [2024-07-21 01:36:32.566786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.906 ms 00:26:47.519 [2024-07-21 01:36:32.566797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.567009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.567022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:47.519 [2024-07-21 01:36:32.567037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:26:47.519 [2024-07-21 01:36:32.567048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.570881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.570917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:47.519 [2024-07-21 01:36:32.570934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.812 ms 00:26:47.519 [2024-07-21 01:36:32.570948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.573933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.573965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:47.519 [2024-07-21 01:36:32.573982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:26:47.519 [2024-07-21 01:36:32.573992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.574274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.574288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:47.519 [2024-07-21 01:36:32.574302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:26:47.519 [2024-07-21 01:36:32.574312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.621851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 
[2024-07-21 01:36:32.621888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:47.519 [2024-07-21 01:36:32.621906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.564 ms 00:26:47.519 [2024-07-21 01:36:32.621920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.627284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.627316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:47.519 [2024-07-21 01:36:32.627332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.329 ms 00:26:47.519 [2024-07-21 01:36:32.627343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.630522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.630554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:47.519 [2024-07-21 01:36:32.630568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:26:47.519 [2024-07-21 01:36:32.630578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.634381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.634413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:47.519 [2024-07-21 01:36:32.634428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.770 ms 00:26:47.519 [2024-07-21 01:36:32.634438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.634486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.634498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:47.519 [2024-07-21 01:36:32.634512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:47.519 [2024-07-21 01:36:32.634522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.634596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.519 [2024-07-21 01:36:32.634608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:47.519 [2024-07-21 01:36:32.634622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:47.519 [2024-07-21 01:36:32.634632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.519 [2024-07-21 01:36:32.636025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4181.287 ms, result 0 00:26:47.519 { 00:26:47.519 "name": "ftl0", 00:26:47.519 "uuid": "22795033-ca25-42d0-8831-7d3fb7d5cdbf" 00:26:47.519 } 00:26:47.519 01:36:32 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:47.519 01:36:32 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:47.780 01:36:32 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:47.780 01:36:32 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:47.780 [2024-07-21 01:36:33.001472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.001521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
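With startup done (result 0 and ftl0 created with the UUID printed above), the test saves the bdev subsystem configuration and unloads the FTL bdev; the trace that follows is presumably the fast-shutdown path, given the --fast-shutdown flag used at creation, persisting L2P, NV cache and band metadata before setting the clean state. Condensed from the commands visible in this trace (the redirect target for the JSON is not shown here, so the file name below is only a placeholder):

# Capture the bdev subsystem config wrapped in a "subsystems" envelope, then unload ftl0.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
{
  echo '{"subsystems": ['
  $rpc save_subsystem_config -n bdev
  echo ']}'
} > ftl.json    # placeholder name; the actual destination is not visible in this excerpt
$rpc bdev_ftl_unload -b ftl0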
00:26:47.780 [2024-07-21 01:36:33.001535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:47.780 [2024-07-21 01:36:33.001553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.001578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:47.780 [2024-07-21 01:36:33.002734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.002763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:47.780 [2024-07-21 01:36:33.002780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:26:47.780 [2024-07-21 01:36:33.002790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.003010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.003022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:47.780 [2024-07-21 01:36:33.003036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:26:47.780 [2024-07-21 01:36:33.003045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.005492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.005514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:47.780 [2024-07-21 01:36:33.005527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:26:47.780 [2024-07-21 01:36:33.005537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.010279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.010316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:47.780 [2024-07-21 01:36:33.010332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms 00:26:47.780 [2024-07-21 01:36:33.010341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.011915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.011949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:47.780 [2024-07-21 01:36:33.011968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:26:47.780 [2024-07-21 01:36:33.011978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.017682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.017717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:47.780 [2024-07-21 01:36:33.017733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.674 ms 00:26:47.780 [2024-07-21 01:36:33.017751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.017934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.017951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:47.780 [2024-07-21 01:36:33.017966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:26:47.780 [2024-07-21 01:36:33.017979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 
01:36:33.020158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.020190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:47.780 [2024-07-21 01:36:33.020205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:26:47.780 [2024-07-21 01:36:33.020214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.021690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.021723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:47.780 [2024-07-21 01:36:33.021744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:26:47.780 [2024-07-21 01:36:33.021754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.023028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.023057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:47.780 [2024-07-21 01:36:33.023072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:26:47.780 [2024-07-21 01:36:33.023081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.024256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.780 [2024-07-21 01:36:33.024286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:47.780 [2024-07-21 01:36:33.024300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:26:47.780 [2024-07-21 01:36:33.024310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.780 [2024-07-21 01:36:33.024342] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:47.780 [2024-07-21 01:36:33.024360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:47.780 [2024-07-21 01:36:33.024767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024838] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.024991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 
01:36:33.025153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:26:47.781 [2024-07-21 01:36:33.025478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:47.781 [2024-07-21 01:36:33.025659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:47.781 [2024-07-21 01:36:33.025676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22795033-ca25-42d0-8831-7d3fb7d5cdbf 00:26:47.781 [2024-07-21 01:36:33.025688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:47.781 [2024-07-21 01:36:33.025701] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:47.781 [2024-07-21 01:36:33.025710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:47.781 [2024-07-21 01:36:33.025724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:47.781 [2024-07-21 01:36:33.025733] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:47.781 [2024-07-21 01:36:33.025746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:47.781 [2024-07-21 01:36:33.025759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:47.781 [2024-07-21 01:36:33.025772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:47.781 [2024-07-21 01:36:33.025781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:47.781 [2024-07-21 01:36:33.025793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.781 [2024-07-21 01:36:33.025803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:47.781 [2024-07-21 01:36:33.025816] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:26:47.781 [2024-07-21 01:36:33.025836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.781 [2024-07-21 01:36:33.028487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.781 [2024-07-21 01:36:33.028508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:47.781 [2024-07-21 01:36:33.028524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:26:47.781 [2024-07-21 01:36:33.028534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.781 [2024-07-21 01:36:33.028691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.781 [2024-07-21 01:36:33.028701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:47.781 [2024-07-21 01:36:33.028714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:26:47.781 [2024-07-21 01:36:33.028733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.781 [2024-07-21 01:36:33.039046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.781 [2024-07-21 01:36:33.039076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.781 [2024-07-21 01:36:33.039091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.781 [2024-07-21 01:36:33.039105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.781 [2024-07-21 01:36:33.039157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.781 [2024-07-21 01:36:33.039168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.781 [2024-07-21 01:36:33.039181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.781 [2024-07-21 01:36:33.039191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.781 [2024-07-21 01:36:33.039266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.781 [2024-07-21 01:36:33.039279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.781 [2024-07-21 01:36:33.039295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.039304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.039328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.039338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.782 [2024-07-21 01:36:33.039351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.039361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.057298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.057342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.782 [2024-07-21 01:36:33.057358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.057368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.069788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.069868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:26:47.782 [2024-07-21 01:36:33.069885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.069896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.069997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.070010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.782 [2024-07-21 01:36:33.070027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.070038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.070105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.070120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.782 [2024-07-21 01:36:33.070134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.070144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.070246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.070259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.782 [2024-07-21 01:36:33.070273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.070283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.070329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.070341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:47.782 [2024-07-21 01:36:33.070358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.070367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.070437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.070449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.782 [2024-07-21 01:36:33.070466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.070477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.070538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:47.782 [2024-07-21 01:36:33.070553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.782 [2024-07-21 01:36:33.070566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:47.782 [2024-07-21 01:36:33.070577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.782 [2024-07-21 01:36:33.070761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.334 ms, result 0 00:26:47.782 true 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 95434 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 95434 ']' 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 95434 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:26:48.041 01:36:33 
ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95434 00:26:48.041 killing process with pid 95434 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95434' 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 95434 00:26:48.041 01:36:33 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 95434 00:26:51.329 01:36:36 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:55.548 262144+0 records in 00:26:55.548 262144+0 records out 00:26:55.548 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.92944 s, 273 MB/s 00:26:55.548 01:36:40 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:56.927 01:36:42 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:56.927 [2024-07-21 01:36:42.147423] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:56.927 [2024-07-21 01:36:42.147569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95638 ] 00:26:57.186 [2024-07-21 01:36:42.320819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.186 [2024-07-21 01:36:42.396689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.447 [2024-07-21 01:36:42.545320] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:57.448 [2024-07-21 01:36:42.545411] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:57.448 [2024-07-21 01:36:42.699073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.699158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:57.448 [2024-07-21 01:36:42.699175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:57.448 [2024-07-21 01:36:42.699185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.699250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.699265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:57.448 [2024-07-21 01:36:42.699276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:57.448 [2024-07-21 01:36:42.699290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.699312] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:57.448 [2024-07-21 01:36:42.699546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:57.448 [2024-07-21 01:36:42.699570] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.699590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:57.448 [2024-07-21 01:36:42.699601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:26:57.448 [2024-07-21 01:36:42.699610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.701045] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:57.448 [2024-07-21 01:36:42.703736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.703781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:57.448 [2024-07-21 01:36:42.703799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:26:57.448 [2024-07-21 01:36:42.703809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.703884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.703898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:57.448 [2024-07-21 01:36:42.703909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:57.448 [2024-07-21 01:36:42.703920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.710588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.710628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.448 [2024-07-21 01:36:42.710640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.621 ms 00:26:57.448 [2024-07-21 01:36:42.710650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.710743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.710756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.448 [2024-07-21 01:36:42.710766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:57.448 [2024-07-21 01:36:42.710776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.710862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.710886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:57.448 [2024-07-21 01:36:42.710900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:57.448 [2024-07-21 01:36:42.710909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.710937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:57.448 [2024-07-21 01:36:42.712534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.712563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.448 [2024-07-21 01:36:42.712574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:26:57.448 [2024-07-21 01:36:42.712584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.712616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.712627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:26:57.448 [2024-07-21 01:36:42.712641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:57.448 [2024-07-21 01:36:42.712650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.712673] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:57.448 [2024-07-21 01:36:42.712696] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:57.448 [2024-07-21 01:36:42.712739] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:57.448 [2024-07-21 01:36:42.712765] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:57.448 [2024-07-21 01:36:42.712863] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:57.448 [2024-07-21 01:36:42.712881] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:57.448 [2024-07-21 01:36:42.712897] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:57.448 [2024-07-21 01:36:42.712911] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:57.448 [2024-07-21 01:36:42.712923] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:57.448 [2024-07-21 01:36:42.712934] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:57.448 [2024-07-21 01:36:42.712945] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:57.448 [2024-07-21 01:36:42.712954] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:57.448 [2024-07-21 01:36:42.712971] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:57.448 [2024-07-21 01:36:42.712982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.712992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:57.448 [2024-07-21 01:36:42.713002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:26:57.448 [2024-07-21 01:36:42.713015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.713090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.448 [2024-07-21 01:36:42.713101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:57.448 [2024-07-21 01:36:42.713111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:57.448 [2024-07-21 01:36:42.713121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.448 [2024-07-21 01:36:42.713207] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:57.448 [2024-07-21 01:36:42.713219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:57.448 [2024-07-21 01:36:42.713230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.448 [2024-07-21 01:36:42.713240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:57.448 [2024-07-21 
01:36:42.713263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:57.448 [2024-07-21 01:36:42.713282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:57.448 [2024-07-21 01:36:42.713292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.448 [2024-07-21 01:36:42.713310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:57.448 [2024-07-21 01:36:42.713320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:57.448 [2024-07-21 01:36:42.713339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:57.448 [2024-07-21 01:36:42.713349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:57.448 [2024-07-21 01:36:42.713358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:57.448 [2024-07-21 01:36:42.713368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:57.448 [2024-07-21 01:36:42.713389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:57.448 [2024-07-21 01:36:42.713399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:57.448 [2024-07-21 01:36:42.713418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.448 [2024-07-21 01:36:42.713436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:57.448 [2024-07-21 01:36:42.713446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.448 [2024-07-21 01:36:42.713464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:57.448 [2024-07-21 01:36:42.713474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.448 [2024-07-21 01:36:42.713491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:57.448 [2024-07-21 01:36:42.713500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:57.448 [2024-07-21 01:36:42.713519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:57.448 [2024-07-21 01:36:42.713535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:57.448 [2024-07-21 01:36:42.713553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:57.448 [2024-07-21 01:36:42.713562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:57.448 [2024-07-21 01:36:42.713571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:26:57.448 [2024-07-21 01:36:42.713580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:57.448 [2024-07-21 01:36:42.713589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:57.448 [2024-07-21 01:36:42.713598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:57.448 [2024-07-21 01:36:42.713616] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:57.448 [2024-07-21 01:36:42.713626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.448 [2024-07-21 01:36:42.713635] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:57.448 [2024-07-21 01:36:42.713646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:57.449 [2024-07-21 01:36:42.713656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:57.449 [2024-07-21 01:36:42.713665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:57.449 [2024-07-21 01:36:42.713674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:57.449 [2024-07-21 01:36:42.713686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:57.449 [2024-07-21 01:36:42.713696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:57.449 [2024-07-21 01:36:42.713705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:57.449 [2024-07-21 01:36:42.713715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:57.449 [2024-07-21 01:36:42.713724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:57.449 [2024-07-21 01:36:42.713735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:57.449 [2024-07-21 01:36:42.713753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.449 [2024-07-21 01:36:42.713764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:57.449 [2024-07-21 01:36:42.713774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:57.449 [2024-07-21 01:36:42.713785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:57.449 [2024-07-21 01:36:42.713795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:57.449 [2024-07-21 01:36:42.713805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:57.449 [2024-07-21 01:36:42.713815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:57.449 [2024-07-21 01:36:42.714140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:57.449 [2024-07-21 01:36:42.714208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:57.449 [2024-07-21 
01:36:42.714257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:57.449 [2024-07-21 01:36:42.714310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:57.449 [2024-07-21 01:36:42.714359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:57.449 [2024-07-21 01:36:42.714463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:57.449 [2024-07-21 01:36:42.714518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:57.449 [2024-07-21 01:36:42.714567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:57.449 [2024-07-21 01:36:42.714615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:57.449 [2024-07-21 01:36:42.714665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.449 [2024-07-21 01:36:42.714716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:57.449 [2024-07-21 01:36:42.714889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:57.449 [2024-07-21 01:36:42.714939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:57.449 [2024-07-21 01:36:42.714988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:57.449 [2024-07-21 01:36:42.715038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.449 [2024-07-21 01:36:42.715151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:57.449 [2024-07-21 01:36:42.715188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:26:57.449 [2024-07-21 01:36:42.715218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.449 [2024-07-21 01:36:42.737263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.449 [2024-07-21 01:36:42.737520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:57.449 [2024-07-21 01:36:42.737661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.986 ms 00:26:57.449 [2024-07-21 01:36:42.737679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.449 [2024-07-21 01:36:42.737779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.449 [2024-07-21 01:36:42.737792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:57.449 [2024-07-21 01:36:42.737803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:57.449 [2024-07-21 01:36:42.737819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.449 [2024-07-21 01:36:42.749441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.449 [2024-07-21 
01:36:42.749509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:57.449 [2024-07-21 01:36:42.749528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.541 ms 00:26:57.449 [2024-07-21 01:36:42.749544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.449 [2024-07-21 01:36:42.749596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.449 [2024-07-21 01:36:42.749612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:57.449 [2024-07-21 01:36:42.749627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:57.449 [2024-07-21 01:36:42.749646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.449 [2024-07-21 01:36:42.750157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.449 [2024-07-21 01:36:42.750172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:57.449 [2024-07-21 01:36:42.750183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:26:57.449 [2024-07-21 01:36:42.750193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.449 [2024-07-21 01:36:42.750310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.449 [2024-07-21 01:36:42.750327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:57.449 [2024-07-21 01:36:42.750338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:57.449 [2024-07-21 01:36:42.750347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.756156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.756196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:57.709 [2024-07-21 01:36:42.756217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.794 ms 00:26:57.709 [2024-07-21 01:36:42.756227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.758804] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:57.709 [2024-07-21 01:36:42.758852] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:57.709 [2024-07-21 01:36:42.758867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.758878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:57.709 [2024-07-21 01:36:42.758889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:26:57.709 [2024-07-21 01:36:42.758899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.771759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.771806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:57.709 [2024-07-21 01:36:42.771819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.843 ms 00:26:57.709 [2024-07-21 01:36:42.771844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.773735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.773771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:26:57.709 [2024-07-21 01:36:42.773782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:26:57.709 [2024-07-21 01:36:42.773792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.775332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.775363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:57.709 [2024-07-21 01:36:42.775375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:26:57.709 [2024-07-21 01:36:42.775385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.775683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.775700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:57.709 [2024-07-21 01:36:42.775711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:26:57.709 [2024-07-21 01:36:42.775721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.797555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.797631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:57.709 [2024-07-21 01:36:42.797648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.843 ms 00:26:57.709 [2024-07-21 01:36:42.797658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.803972] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:57.709 [2024-07-21 01:36:42.807159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.807196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:57.709 [2024-07-21 01:36:42.807209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.462 ms 00:26:57.709 [2024-07-21 01:36:42.807220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.807305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.807318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:57.709 [2024-07-21 01:36:42.807329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:57.709 [2024-07-21 01:36:42.807339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.807408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.807420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:57.709 [2024-07-21 01:36:42.807436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:57.709 [2024-07-21 01:36:42.807446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.807467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.807478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:57.709 [2024-07-21 01:36:42.807488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:57.709 [2024-07-21 01:36:42.807498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:57.709 [2024-07-21 01:36:42.807532] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:57.709 [2024-07-21 01:36:42.807544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.807553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:57.709 [2024-07-21 01:36:42.807563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:57.709 [2024-07-21 01:36:42.807576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.811363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.811396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:57.709 [2024-07-21 01:36:42.811419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.774 ms 00:26:57.709 [2024-07-21 01:36:42.811428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.811496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.709 [2024-07-21 01:36:42.811516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:57.709 [2024-07-21 01:36:42.811527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:57.709 [2024-07-21 01:36:42.811537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.709 [2024-07-21 01:36:42.812627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.304 ms, result 0 00:27:40.986  Copying: 24/1024 [MB] (24 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (22 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 116/1024 [MB] (23 MBps) Copying: 139/1024 [MB] (23 MBps) Copying: 162/1024 [MB] (22 MBps) Copying: 184/1024 [MB] (22 MBps) Copying: 207/1024 [MB] (22 MBps) Copying: 230/1024 [MB] (23 MBps) Copying: 255/1024 [MB] (24 MBps) Copying: 279/1024 [MB] (24 MBps) Copying: 302/1024 [MB] (23 MBps) Copying: 325/1024 [MB] (22 MBps) Copying: 349/1024 [MB] (23 MBps) Copying: 373/1024 [MB] (24 MBps) Copying: 397/1024 [MB] (23 MBps) Copying: 422/1024 [MB] (24 MBps) Copying: 447/1024 [MB] (25 MBps) Copying: 471/1024 [MB] (24 MBps) Copying: 495/1024 [MB] (24 MBps) Copying: 520/1024 [MB] (24 MBps) Copying: 545/1024 [MB] (24 MBps) Copying: 569/1024 [MB] (24 MBps) Copying: 594/1024 [MB] (25 MBps) Copying: 619/1024 [MB] (24 MBps) Copying: 642/1024 [MB] (23 MBps) Copying: 666/1024 [MB] (23 MBps) Copying: 688/1024 [MB] (22 MBps) Copying: 712/1024 [MB] (24 MBps) Copying: 736/1024 [MB] (23 MBps) Copying: 761/1024 [MB] (24 MBps) Copying: 783/1024 [MB] (22 MBps) Copying: 807/1024 [MB] (23 MBps) Copying: 831/1024 [MB] (23 MBps) Copying: 853/1024 [MB] (22 MBps) Copying: 876/1024 [MB] (22 MBps) Copying: 899/1024 [MB] (22 MBps) Copying: 922/1024 [MB] (23 MBps) Copying: 945/1024 [MB] (23 MBps) Copying: 968/1024 [MB] (22 MBps) Copying: 989/1024 [MB] (21 MBps) Copying: 1012/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-21 01:37:26.235373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.986 [2024-07-21 01:37:26.235443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:40.986 [2024-07-21 01:37:26.235461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:40.986 [2024-07-21 01:37:26.235477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:40.986 [2024-07-21 01:37:26.235499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:40.986 [2024-07-21 01:37:26.236781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.986 [2024-07-21 01:37:26.236805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:40.986 [2024-07-21 01:37:26.236816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:27:40.986 [2024-07-21 01:37:26.236826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.986 [2024-07-21 01:37:26.238765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.986 [2024-07-21 01:37:26.238807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:40.986 [2024-07-21 01:37:26.238820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:27:40.986 [2024-07-21 01:37:26.238849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.986 [2024-07-21 01:37:26.238884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.986 [2024-07-21 01:37:26.238898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:40.987 [2024-07-21 01:37:26.238909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:40.987 [2024-07-21 01:37:26.238919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.987 [2024-07-21 01:37:26.238981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.987 [2024-07-21 01:37:26.238993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:40.987 [2024-07-21 01:37:26.239003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:40.987 [2024-07-21 01:37:26.239013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.987 [2024-07-21 01:37:26.239028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:40.987 [2024-07-21 01:37:26.239055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 
01:37:26.239181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:40.987 [2024-07-21 01:37:26.239459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.239998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:40.987 [2024-07-21 01:37:26.240008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:40.988 [2024-07-21 01:37:26.240132] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:40.988 [2024-07-21 01:37:26.240149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22795033-ca25-42d0-8831-7d3fb7d5cdbf 00:27:40.988 [2024-07-21 01:37:26.240160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:40.988 [2024-07-21 01:37:26.240169] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:40.988 [2024-07-21 01:37:26.240178] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:40.988 [2024-07-21 01:37:26.240187] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:40.988 [2024-07-21 01:37:26.240196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:40.988 [2024-07-21 01:37:26.240206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:40.988 [2024-07-21 01:37:26.240221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:40.988 [2024-07-21 01:37:26.240230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:40.988 [2024-07-21 01:37:26.240238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:40.988 [2024-07-21 01:37:26.240248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:40.988 [2024-07-21 01:37:26.240258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:40.988 [2024-07-21 01:37:26.240271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:27:40.988 [2024-07-21 01:37:26.240280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.242855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.988 [2024-07-21 01:37:26.242877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:40.988 [2024-07-21 01:37:26.242888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:27:40.988 [2024-07-21 01:37:26.242897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.243065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.988 [2024-07-21 01:37:26.243080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:40.988 [2024-07-21 01:37:26.243091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:27:40.988 [2024-07-21 01:37:26.243100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.252628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.252769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:40.988 [2024-07-21 01:37:26.252914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.252952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.253032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.253071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:40.988 [2024-07-21 01:37:26.253100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.253181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.253276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.253313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:40.988 [2024-07-21 01:37:26.253350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.253388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.253531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.253561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:40.988 [2024-07-21 01:37:26.253595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.253623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.272028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.272198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:40.988 [2024-07-21 01:37:26.272275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.272310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 
01:37:26.286014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.286183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:40.988 [2024-07-21 01:37:26.286275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.286310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.286388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.286418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.988 [2024-07-21 01:37:26.286446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.286532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.286634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.286675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.988 [2024-07-21 01:37:26.286705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.286739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.286835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.286923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.988 [2024-07-21 01:37:26.286954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.286982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.287039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.287080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:40.988 [2024-07-21 01:37:26.287161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.287271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.287372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.287448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.988 [2024-07-21 01:37:26.287484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.287549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.287631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:40.988 [2024-07-21 01:37:26.287675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.988 [2024-07-21 01:37:26.287753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:40.988 [2024-07-21 01:37:26.287870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.988 [2024-07-21 01:37:26.288059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 52.717 ms, result 0 00:27:41.923 00:27:41.923 00:27:41.923 01:37:27 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:41.923 [2024-07-21 01:37:27.210357] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:41.923 [2024-07-21 01:37:27.210690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96098 ] 00:27:42.182 [2024-07-21 01:37:27.380006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.182 [2024-07-21 01:37:27.448942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.441 [2024-07-21 01:37:27.597210] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.441 [2024-07-21 01:37:27.597288] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:42.702 [2024-07-21 01:37:27.752096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.752149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:42.702 [2024-07-21 01:37:27.752168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:42.702 [2024-07-21 01:37:27.752179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.752253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.752267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.702 [2024-07-21 01:37:27.752278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:42.702 [2024-07-21 01:37:27.752292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.752322] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:42.702 [2024-07-21 01:37:27.752532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:42.702 [2024-07-21 01:37:27.752552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.752566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.702 [2024-07-21 01:37:27.752577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:27:42.702 [2024-07-21 01:37:27.752588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.752925] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:42.702 [2024-07-21 01:37:27.752949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.752961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:42.702 [2024-07-21 01:37:27.752973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:42.702 [2024-07-21 01:37:27.752996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.753110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.753127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:42.702 [2024-07-21 01:37:27.753139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:42.702 [2024-07-21 01:37:27.753149] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.753533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.753554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:42.702 [2024-07-21 01:37:27.753565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:27:42.702 [2024-07-21 01:37:27.753579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.753677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.753691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.702 [2024-07-21 01:37:27.753701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:42.702 [2024-07-21 01:37:27.753719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.753748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.753761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:42.702 [2024-07-21 01:37:27.753771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:42.702 [2024-07-21 01:37:27.753785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.753810] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:42.702 [2024-07-21 01:37:27.756921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.757052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.702 [2024-07-21 01:37:27.757135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:27:42.702 [2024-07-21 01:37:27.757172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.757234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.757270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:42.702 [2024-07-21 01:37:27.757302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:42.702 [2024-07-21 01:37:27.757393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.757467] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:42.702 [2024-07-21 01:37:27.757517] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:42.702 [2024-07-21 01:37:27.757593] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:42.702 [2024-07-21 01:37:27.757734] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:42.702 [2024-07-21 01:37:27.757895] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:42.702 [2024-07-21 01:37:27.758019] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:42.702 [2024-07-21 01:37:27.758071] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:42.702 [2024-07-21 
01:37:27.758126] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:42.702 [2024-07-21 01:37:27.758257] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:42.702 [2024-07-21 01:37:27.758445] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:42.702 [2024-07-21 01:37:27.758478] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:42.702 [2024-07-21 01:37:27.758508] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:42.702 [2024-07-21 01:37:27.758537] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:42.702 [2024-07-21 01:37:27.758661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.758706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:42.702 [2024-07-21 01:37:27.758753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:27:42.702 [2024-07-21 01:37:27.758783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.758890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.702 [2024-07-21 01:37:27.758909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:42.702 [2024-07-21 01:37:27.758921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:42.702 [2024-07-21 01:37:27.758939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.702 [2024-07-21 01:37:27.759024] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:42.702 [2024-07-21 01:37:27.759038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:42.702 [2024-07-21 01:37:27.759051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.702 [2024-07-21 01:37:27.759061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:42.702 [2024-07-21 01:37:27.759082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:42.702 [2024-07-21 01:37:27.759107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:42.702 [2024-07-21 01:37:27.759118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.702 [2024-07-21 01:37:27.759137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:42.702 [2024-07-21 01:37:27.759146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:42.702 [2024-07-21 01:37:27.759156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:42.702 [2024-07-21 01:37:27.759166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:42.702 [2024-07-21 01:37:27.759185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:42.702 [2024-07-21 01:37:27.759195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:42.702 
[2024-07-21 01:37:27.759214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:42.702 [2024-07-21 01:37:27.759224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:42.702 [2024-07-21 01:37:27.759245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.702 [2024-07-21 01:37:27.759264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:42.702 [2024-07-21 01:37:27.759278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.702 [2024-07-21 01:37:27.759298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:42.702 [2024-07-21 01:37:27.759308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.702 [2024-07-21 01:37:27.759330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:42.702 [2024-07-21 01:37:27.759340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:42.702 [2024-07-21 01:37:27.759358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:42.702 [2024-07-21 01:37:27.759367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.702 [2024-07-21 01:37:27.759388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:42.702 [2024-07-21 01:37:27.759398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:42.702 [2024-07-21 01:37:27.759409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:42.702 [2024-07-21 01:37:27.759418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:42.702 [2024-07-21 01:37:27.759429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:42.702 [2024-07-21 01:37:27.759463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.702 [2024-07-21 01:37:27.759473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:42.702 [2024-07-21 01:37:27.759483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:42.702 [2024-07-21 01:37:27.759500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.703 [2024-07-21 01:37:27.759510] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:42.703 [2024-07-21 01:37:27.759530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:42.703 [2024-07-21 01:37:27.759544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:42.703 [2024-07-21 01:37:27.759555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:42.703 [2024-07-21 01:37:27.759565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:42.703 [2024-07-21 01:37:27.759574] ftl_layout.c: 119:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 102400.25 MiB 00:27:42.703 [2024-07-21 01:37:27.759584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:42.703 [2024-07-21 01:37:27.759594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:42.703 [2024-07-21 01:37:27.759604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:42.703 [2024-07-21 01:37:27.759613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:42.703 [2024-07-21 01:37:27.759625] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:42.703 [2024-07-21 01:37:27.759639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:42.703 [2024-07-21 01:37:27.759654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:42.703 [2024-07-21 01:37:27.759666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:42.703 [2024-07-21 01:37:27.759677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:42.703 [2024-07-21 01:37:27.759688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:42.703 [2024-07-21 01:37:27.759699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:42.703 [2024-07-21 01:37:27.759710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:42.703 [2024-07-21 01:37:27.759720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:42.703 [2024-07-21 01:37:27.759731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:42.703 [2024-07-21 01:37:27.759741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:42.703 [2024-07-21 01:37:27.759751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:42.703 [2024-07-21 01:37:27.759761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:42.703 [2024-07-21 01:37:27.759772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:42.703 [2024-07-21 01:37:27.759783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:42.703 [2024-07-21 01:37:27.759794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:42.703 [2024-07-21 01:37:27.759805] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:42.703 [2024-07-21 01:37:27.759823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:27:42.703 [2024-07-21 01:37:27.759850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:42.703 [2024-07-21 01:37:27.759862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:42.703 [2024-07-21 01:37:27.759872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:42.703 [2024-07-21 01:37:27.759882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:42.703 [2024-07-21 01:37:27.759893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.759905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:42.703 [2024-07-21 01:37:27.759916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:27:42.703 [2024-07-21 01:37:27.759926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.787401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.787496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.703 [2024-07-21 01:37:27.787567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.466 ms 00:27:42.703 [2024-07-21 01:37:27.787601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.787876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.787915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:42.703 [2024-07-21 01:37:27.787951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:27:42.703 [2024-07-21 01:37:27.787983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.810424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.810467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.703 [2024-07-21 01:37:27.810486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.318 ms 00:27:42.703 [2024-07-21 01:37:27.810502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.810545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.810566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.703 [2024-07-21 01:37:27.810582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:42.703 [2024-07-21 01:37:27.810606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.810758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.810776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.703 [2024-07-21 01:37:27.810791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:27:42.703 [2024-07-21 01:37:27.810806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.810986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:42.703 [2024-07-21 01:37:27.811006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.703 [2024-07-21 01:37:27.811022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:27:42.703 [2024-07-21 01:37:27.811035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.821489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.821524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.703 [2024-07-21 01:37:27.821545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.442 ms 00:27:42.703 [2024-07-21 01:37:27.821555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.821693] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:42.703 [2024-07-21 01:37:27.821714] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:42.703 [2024-07-21 01:37:27.821727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.821738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:42.703 [2024-07-21 01:37:27.821760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:42.703 [2024-07-21 01:37:27.821770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.831899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.831928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:42.703 [2024-07-21 01:37:27.831941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.127 ms 00:27:42.703 [2024-07-21 01:37:27.831951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.832060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.832084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:42.703 [2024-07-21 01:37:27.832107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:42.703 [2024-07-21 01:37:27.832121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.832169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.832181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:42.703 [2024-07-21 01:37:27.832190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:42.703 [2024-07-21 01:37:27.832209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.832460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.832481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:42.703 [2024-07-21 01:37:27.832499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:27:42.703 [2024-07-21 01:37:27.832508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.832527] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:42.703 
[2024-07-21 01:37:27.832549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.832560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:42.703 [2024-07-21 01:37:27.832577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:42.703 [2024-07-21 01:37:27.832589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.840665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:42.703 [2024-07-21 01:37:27.840976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.841011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:42.703 [2024-07-21 01:37:27.841029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.378 ms 00:27:42.703 [2024-07-21 01:37:27.841043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.843131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.843164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:42.703 [2024-07-21 01:37:27.843176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:27:42.703 [2024-07-21 01:37:27.843185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.843264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.843275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:42.703 [2024-07-21 01:37:27.843286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:42.703 [2024-07-21 01:37:27.843299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.843342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.703 [2024-07-21 01:37:27.843353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:42.703 [2024-07-21 01:37:27.843363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:42.703 [2024-07-21 01:37:27.843373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.703 [2024-07-21 01:37:27.843410] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:42.703 [2024-07-21 01:37:27.843421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.704 [2024-07-21 01:37:27.843431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:42.704 [2024-07-21 01:37:27.843441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:42.704 [2024-07-21 01:37:27.843454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.704 [2024-07-21 01:37:27.848628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.704 [2024-07-21 01:37:27.848662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:42.704 [2024-07-21 01:37:27.848683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:27:42.704 [2024-07-21 01:37:27.848693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.704 [2024-07-21 01:37:27.848775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.704 [2024-07-21 01:37:27.848804] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:42.704 [2024-07-21 01:37:27.848815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:42.704 [2024-07-21 01:37:27.848825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.704 [2024-07-21 01:37:27.850229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.784 ms, result 0 00:28:24.767  Copying: 23/1024 [MB] (23 MBps) Copying: 48/1024 [MB] (24 MBps) Copying: 72/1024 [MB] (24 MBps) Copying: 97/1024 [MB] (24 MBps) Copying: 121/1024 [MB] (24 MBps) Copying: 146/1024 [MB] (24 MBps) Copying: 170/1024 [MB] (24 MBps) Copying: 194/1024 [MB] (24 MBps) Copying: 219/1024 [MB] (25 MBps) Copying: 245/1024 [MB] (25 MBps) Copying: 270/1024 [MB] (25 MBps) Copying: 294/1024 [MB] (24 MBps) Copying: 318/1024 [MB] (23 MBps) Copying: 343/1024 [MB] (24 MBps) Copying: 368/1024 [MB] (24 MBps) Copying: 393/1024 [MB] (25 MBps) Copying: 418/1024 [MB] (25 MBps) Copying: 443/1024 [MB] (24 MBps) Copying: 468/1024 [MB] (25 MBps) Copying: 493/1024 [MB] (24 MBps) Copying: 518/1024 [MB] (24 MBps) Copying: 542/1024 [MB] (24 MBps) Copying: 567/1024 [MB] (24 MBps) Copying: 592/1024 [MB] (24 MBps) Copying: 617/1024 [MB] (25 MBps) Copying: 641/1024 [MB] (24 MBps) Copying: 664/1024 [MB] (23 MBps) Copying: 689/1024 [MB] (24 MBps) Copying: 713/1024 [MB] (24 MBps) Copying: 738/1024 [MB] (24 MBps) Copying: 765/1024 [MB] (26 MBps) Copying: 789/1024 [MB] (24 MBps) Copying: 813/1024 [MB] (23 MBps) Copying: 836/1024 [MB] (23 MBps) Copying: 860/1024 [MB] (23 MBps) Copying: 884/1024 [MB] (24 MBps) Copying: 908/1024 [MB] (24 MBps) Copying: 932/1024 [MB] (24 MBps) Copying: 957/1024 [MB] (24 MBps) Copying: 981/1024 [MB] (24 MBps) Copying: 1006/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-21 01:38:09.961924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.767 [2024-07-21 01:38:09.962024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:24.767 [2024-07-21 01:38:09.962058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:24.767 [2024-07-21 01:38:09.962080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.767 [2024-07-21 01:38:09.962135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:24.767 [2024-07-21 01:38:09.963997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.767 [2024-07-21 01:38:09.964192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:24.767 [2024-07-21 01:38:09.964331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.829 ms 00:28:24.767 [2024-07-21 01:38:09.964487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.767 [2024-07-21 01:38:09.964912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.767 [2024-07-21 01:38:09.964940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:24.767 [2024-07-21 01:38:09.964963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:28:24.767 [2024-07-21 01:38:09.964982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.767 [2024-07-21 01:38:09.965066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.767 [2024-07-21 01:38:09.965088] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:24.767 [2024-07-21 01:38:09.965110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:24.767 [2024-07-21 01:38:09.965130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.767 [2024-07-21 01:38:09.965223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.767 [2024-07-21 01:38:09.965245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:24.767 [2024-07-21 01:38:09.965265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:24.767 [2024-07-21 01:38:09.965283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.767 [2024-07-21 01:38:09.965313] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:24.767 [2024-07-21 01:38:09.965340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 
wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.965811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.966083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.966196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.966291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.966573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.966788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.966992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.967982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:24.767 [2024-07-21 01:38:09.968466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968608] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.968994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969253] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:24.768 [2024-07-21 01:38:09.969431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:24.768 [2024-07-21 01:38:09.969453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22795033-ca25-42d0-8831-7d3fb7d5cdbf 00:28:24.768 [2024-07-21 01:38:09.969477] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:24.768 [2024-07-21 01:38:09.969499] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:24.768 [2024-07-21 01:38:09.969520] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:24.768 [2024-07-21 01:38:09.969542] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:24.768 [2024-07-21 01:38:09.969563] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:24.768 [2024-07-21 01:38:09.969586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:24.768 [2024-07-21 01:38:09.969628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:24.768 [2024-07-21 01:38:09.969650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:24.768 [2024-07-21 01:38:09.969670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:24.768 [2024-07-21 01:38:09.969694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.768 [2024-07-21 01:38:09.969717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:24.768 [2024-07-21 01:38:09.969741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.388 ms 00:28:24.768 [2024-07-21 01:38:09.969771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:09.973136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.768 [2024-07-21 01:38:09.973183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:24.768 [2024-07-21 01:38:09.973209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:28:24.768 [2024-07-21 01:38:09.973231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:09.973441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.768 [2024-07-21 01:38:09.973464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:24.768 [2024-07-21 01:38:09.973498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:28:24.768 [2024-07-21 01:38:09.973520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:24.768 [2024-07-21 01:38:09.984768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:09.984813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.768 [2024-07-21 01:38:09.984845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:09.984863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:09.984931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:09.984948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.768 [2024-07-21 01:38:09.984969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:09.984984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:09.985055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:09.985075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.768 [2024-07-21 01:38:09.985090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:09.985105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:09.985130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:09.985146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.768 [2024-07-21 01:38:09.985162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:09.985182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.005534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.005575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.768 [2024-07-21 01:38:10.005589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:10.005599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.020474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.020505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.768 [2024-07-21 01:38:10.020525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:10.020536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.020587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.020599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.768 [2024-07-21 01:38:10.020610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:10.020629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.020667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.020677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.768 [2024-07-21 01:38:10.020688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 
01:38:10.020699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.020792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.020806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.768 [2024-07-21 01:38:10.020816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:10.020827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.020870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.020884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:24.768 [2024-07-21 01:38:10.020895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:10.020906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.020963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.020975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.768 [2024-07-21 01:38:10.020986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.768 [2024-07-21 01:38:10.020996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.768 [2024-07-21 01:38:10.021047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.768 [2024-07-21 01:38:10.021071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.768 [2024-07-21 01:38:10.021083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.769 [2024-07-21 01:38:10.021093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.769 [2024-07-21 01:38:10.021242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.387 ms, result 0 00:28:25.337 00:28:25.337 00:28:25.337 01:38:10 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:27.240 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:27.240 01:38:12 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:27.240 [2024-07-21 01:38:12.106729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:27.240 [2024-07-21 01:38:12.106867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96552 ] 00:28:27.240 [2024-07-21 01:38:12.272593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.240 [2024-07-21 01:38:12.335430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.240 [2024-07-21 01:38:12.480493] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:27.240 [2024-07-21 01:38:12.480581] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:27.500 [2024-07-21 01:38:12.634951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.635000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:27.500 [2024-07-21 01:38:12.635016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:27.500 [2024-07-21 01:38:12.635033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.635093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.635105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:27.500 [2024-07-21 01:38:12.635123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:27.500 [2024-07-21 01:38:12.635136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.635156] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:27.500 [2024-07-21 01:38:12.635357] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:27.500 [2024-07-21 01:38:12.635375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.635395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:27.500 [2024-07-21 01:38:12.635406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:28:27.500 [2024-07-21 01:38:12.635415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.635764] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:27.500 [2024-07-21 01:38:12.635791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.635803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:27.500 [2024-07-21 01:38:12.635814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:27.500 [2024-07-21 01:38:12.635847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.635934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.635947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:27.500 [2024-07-21 01:38:12.635957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:27.500 [2024-07-21 01:38:12.635967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.636319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 
01:38:12.636340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:27.500 [2024-07-21 01:38:12.636350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:28:27.500 [2024-07-21 01:38:12.636364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.636460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.636473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:27.500 [2024-07-21 01:38:12.636483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:27.500 [2024-07-21 01:38:12.636492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.636530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.636542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:27.500 [2024-07-21 01:38:12.636552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:27.500 [2024-07-21 01:38:12.636565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.636589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:27.500 [2024-07-21 01:38:12.639313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.639342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:27.500 [2024-07-21 01:38:12.639353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.733 ms 00:28:27.500 [2024-07-21 01:38:12.639363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.639394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.639407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:27.500 [2024-07-21 01:38:12.639426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:27.500 [2024-07-21 01:38:12.639435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.639470] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:27.500 [2024-07-21 01:38:12.639494] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:27.500 [2024-07-21 01:38:12.639540] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:27.500 [2024-07-21 01:38:12.639569] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:27.500 [2024-07-21 01:38:12.639650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:27.500 [2024-07-21 01:38:12.639663] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:27.500 [2024-07-21 01:38:12.639675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:27.500 [2024-07-21 01:38:12.639692] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:27.500 [2024-07-21 01:38:12.639718] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:27.500 [2024-07-21 01:38:12.639730] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:27.500 [2024-07-21 01:38:12.639740] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:27.500 [2024-07-21 01:38:12.639749] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:27.500 [2024-07-21 01:38:12.639758] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:27.500 [2024-07-21 01:38:12.639772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.639788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:27.500 [2024-07-21 01:38:12.639798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:28:27.500 [2024-07-21 01:38:12.639815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.500 [2024-07-21 01:38:12.639895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.500 [2024-07-21 01:38:12.639913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:27.501 [2024-07-21 01:38:12.639922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:27.501 [2024-07-21 01:38:12.639931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.501 [2024-07-21 01:38:12.640012] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:27.501 [2024-07-21 01:38:12.640026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:27.501 [2024-07-21 01:38:12.640036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:27.501 [2024-07-21 01:38:12.640064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640073] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:27.501 [2024-07-21 01:38:12.640094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.501 [2024-07-21 01:38:12.640111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:27.501 [2024-07-21 01:38:12.640121] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:27.501 [2024-07-21 01:38:12.640133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.501 [2024-07-21 01:38:12.640142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:27.501 [2024-07-21 01:38:12.640159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:27.501 [2024-07-21 01:38:12.640168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:27.501 [2024-07-21 01:38:12.640185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:27.501 [2024-07-21 01:38:12.640212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:27.501 [2024-07-21 01:38:12.640238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:27.501 [2024-07-21 01:38:12.640268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:27.501 [2024-07-21 01:38:12.640294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:27.501 [2024-07-21 01:38:12.640319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.501 [2024-07-21 01:38:12.640336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:27.501 [2024-07-21 01:38:12.640344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:27.501 [2024-07-21 01:38:12.640352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.501 [2024-07-21 01:38:12.640361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:27.501 [2024-07-21 01:38:12.640369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:27.501 [2024-07-21 01:38:12.640378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:27.501 [2024-07-21 01:38:12.640402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:27.501 [2024-07-21 01:38:12.640410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640418] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:27.501 [2024-07-21 01:38:12.640428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:27.501 [2024-07-21 01:38:12.640440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.501 [2024-07-21 01:38:12.640458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:27.501 [2024-07-21 01:38:12.640467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:27.501 [2024-07-21 01:38:12.640475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:27.501 [2024-07-21 01:38:12.640484] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:27.501 [2024-07-21 01:38:12.640493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:27.501 [2024-07-21 01:38:12.640501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:27.501 [2024-07-21 01:38:12.640511] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:27.501 [2024-07-21 01:38:12.640522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.501 [2024-07-21 01:38:12.640533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:27.501 [2024-07-21 01:38:12.640546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:27.501 [2024-07-21 01:38:12.640556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:27.501 [2024-07-21 01:38:12.640567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:27.501 [2024-07-21 01:38:12.640577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:27.501 [2024-07-21 01:38:12.640586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:27.501 [2024-07-21 01:38:12.640596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:27.501 [2024-07-21 01:38:12.640605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:27.501 [2024-07-21 01:38:12.640615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:27.501 [2024-07-21 01:38:12.640625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:27.501 [2024-07-21 01:38:12.640634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:27.501 [2024-07-21 01:38:12.640643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:27.501 [2024-07-21 01:38:12.640652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:27.501 [2024-07-21 01:38:12.640662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:27.501 [2024-07-21 01:38:12.640671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:27.501 [2024-07-21 01:38:12.640681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.501 [2024-07-21 01:38:12.640692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:28:27.501 [2024-07-21 01:38:12.640704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:27.501 [2024-07-21 01:38:12.640714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:27.501 [2024-07-21 01:38:12.640724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:27.501 [2024-07-21 01:38:12.640734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.501 [2024-07-21 01:38:12.640752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:27.501 [2024-07-21 01:38:12.640762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:28:27.501 [2024-07-21 01:38:12.640771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.501 [2024-07-21 01:38:12.670533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.501 [2024-07-21 01:38:12.670628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:27.501 [2024-07-21 01:38:12.670672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.754 ms 00:28:27.501 [2024-07-21 01:38:12.670707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.501 [2024-07-21 01:38:12.670991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.501 [2024-07-21 01:38:12.671031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:27.501 [2024-07-21 01:38:12.671094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:28:27.501 [2024-07-21 01:38:12.671125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.692888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.692933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:27.502 [2024-07-21 01:38:12.692952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.635 ms 00:28:27.502 [2024-07-21 01:38:12.692966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.693022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.693043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:27.502 [2024-07-21 01:38:12.693059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:27.502 [2024-07-21 01:38:12.693073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.693216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.693244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:27.502 [2024-07-21 01:38:12.693260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:27.502 [2024-07-21 01:38:12.693275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.693447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.693466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:27.502 [2024-07-21 01:38:12.693481] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:28:27.502 [2024-07-21 01:38:12.693495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.703925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.703960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:27.502 [2024-07-21 01:38:12.703972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.417 ms 00:28:27.502 [2024-07-21 01:38:12.703990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.704131] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:27.502 [2024-07-21 01:38:12.704148] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:27.502 [2024-07-21 01:38:12.704161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.704172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:27.502 [2024-07-21 01:38:12.704187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:27.502 [2024-07-21 01:38:12.704197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.714344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.714373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:27.502 [2024-07-21 01:38:12.714385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.147 ms 00:28:27.502 [2024-07-21 01:38:12.714395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.714512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.714540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:27.502 [2024-07-21 01:38:12.714562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:27.502 [2024-07-21 01:38:12.714575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.714626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.714644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:27.502 [2024-07-21 01:38:12.714654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:27.502 [2024-07-21 01:38:12.714664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.714930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.714969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:27.502 [2024-07-21 01:38:12.714980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:28:27.502 [2024-07-21 01:38:12.714990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.715009] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:27.502 [2024-07-21 01:38:12.715025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.715036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:28:27.502 [2024-07-21 01:38:12.715045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:27.502 [2024-07-21 01:38:12.715058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.723159] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:27.502 [2024-07-21 01:38:12.723329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.723350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:27.502 [2024-07-21 01:38:12.723361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.263 ms 00:28:27.502 [2024-07-21 01:38:12.723375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.725477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.725507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:27.502 [2024-07-21 01:38:12.725518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:28:27.502 [2024-07-21 01:38:12.725528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.725604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.725617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:27.502 [2024-07-21 01:38:12.725628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:27.502 [2024-07-21 01:38:12.725642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.725683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.725695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:27.502 [2024-07-21 01:38:12.725705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:27.502 [2024-07-21 01:38:12.725723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.725769] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:27.502 [2024-07-21 01:38:12.725781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.725791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:27.502 [2024-07-21 01:38:12.725801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:27.502 [2024-07-21 01:38:12.725814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.731030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.731065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:27.502 [2024-07-21 01:38:12.731077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:28:27.502 [2024-07-21 01:38:12.731087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.731151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.502 [2024-07-21 01:38:12.731162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:27.502 [2024-07-21 01:38:12.731172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 
00:28:27.502 [2024-07-21 01:38:12.731182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.502 [2024-07-21 01:38:12.732458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.244 ms, result 0 00:29:11.779  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 70/1024 [MB] (23 MBps) Copying: 93/1024 [MB] (23 MBps) Copying: 116/1024 [MB] (22 MBps) Copying: 139/1024 [MB] (23 MBps) Copying: 162/1024 [MB] (22 MBps) Copying: 185/1024 [MB] (22 MBps) Copying: 208/1024 [MB] (22 MBps) Copying: 230/1024 [MB] (22 MBps) Copying: 253/1024 [MB] (23 MBps) Copying: 277/1024 [MB] (23 MBps) Copying: 300/1024 [MB] (23 MBps) Copying: 323/1024 [MB] (23 MBps) Copying: 346/1024 [MB] (22 MBps) Copying: 369/1024 [MB] (22 MBps) Copying: 391/1024 [MB] (22 MBps) Copying: 415/1024 [MB] (23 MBps) Copying: 439/1024 [MB] (24 MBps) Copying: 463/1024 [MB] (23 MBps) Copying: 486/1024 [MB] (23 MBps) Copying: 509/1024 [MB] (23 MBps) Copying: 532/1024 [MB] (22 MBps) Copying: 557/1024 [MB] (24 MBps) Copying: 582/1024 [MB] (25 MBps) Copying: 608/1024 [MB] (25 MBps) Copying: 632/1024 [MB] (24 MBps) Copying: 656/1024 [MB] (23 MBps) Copying: 680/1024 [MB] (24 MBps) Copying: 705/1024 [MB] (24 MBps) Copying: 729/1024 [MB] (24 MBps) Copying: 752/1024 [MB] (23 MBps) Copying: 774/1024 [MB] (22 MBps) Copying: 802/1024 [MB] (27 MBps) Copying: 826/1024 [MB] (24 MBps) Copying: 848/1024 [MB] (22 MBps) Copying: 872/1024 [MB] (23 MBps) Copying: 895/1024 [MB] (23 MBps) Copying: 918/1024 [MB] (23 MBps) Copying: 943/1024 [MB] (24 MBps) Copying: 966/1024 [MB] (23 MBps) Copying: 990/1024 [MB] (23 MBps) Copying: 1014/1024 [MB] (23 MBps) Copying: 1048316/1048576 [kB] (9772 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-21 01:38:56.913000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.779 [2024-07-21 01:38:56.913074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:11.779 [2024-07-21 01:38:56.913093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:11.779 [2024-07-21 01:38:56.913105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.779 [2024-07-21 01:38:56.915227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:11.779 [2024-07-21 01:38:56.917781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.779 [2024-07-21 01:38:56.917817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:11.779 [2024-07-21 01:38:56.917860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:29:11.779 [2024-07-21 01:38:56.917875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.779 [2024-07-21 01:38:56.926966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.779 [2024-07-21 01:38:56.927001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:11.779 [2024-07-21 01:38:56.927013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.294 ms 00:29:11.779 [2024-07-21 01:38:56.927023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.779 [2024-07-21 01:38:56.927056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.779 [2024-07-21 01:38:56.927068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:11.779 [2024-07-21 
01:38:56.927079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:11.779 [2024-07-21 01:38:56.927089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.779 [2024-07-21 01:38:56.927149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.779 [2024-07-21 01:38:56.927163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:11.779 [2024-07-21 01:38:56.927173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:11.779 [2024-07-21 01:38:56.927182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.779 [2024-07-21 01:38:56.927206] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:11.779 [2024-07-21 01:38:56.927220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:29:11.779 [2024-07-21 01:38:56.927232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 
01:38:56.927867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:11.779 [2024-07-21 01:38:56.927931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.927941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.927975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.927985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.927995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:29:11.780 [2024-07-21 01:38:56.928165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:11.780 [2024-07-21 01:38:56.928496] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:11.780 [2024-07-21 01:38:56.928505] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22795033-ca25-42d0-8831-7d3fb7d5cdbf 00:29:11.780 [2024-07-21 01:38:56.928516] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:29:11.780 [2024-07-21 01:38:56.928533] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129056 00:29:11.780 [2024-07-21 01:38:56.928543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:29:11.780 [2024-07-21 01:38:56.928553] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:29:11.780 [2024-07-21 01:38:56.928567] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:11.780 [2024-07-21 01:38:56.928577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:11.780 [2024-07-21 01:38:56.928586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:11.780 [2024-07-21 01:38:56.928595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:11.780 [2024-07-21 01:38:56.928603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:11.780 [2024-07-21 01:38:56.928612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.780 [2024-07-21 01:38:56.928622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:11.780 [2024-07-21 01:38:56.928631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:29:11.780 [2024-07-21 01:38:56.928641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.931242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.780 [2024-07-21 01:38:56.931270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:11.780 [2024-07-21 01:38:56.931285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:29:11.780 [2024-07-21 01:38:56.931294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.931465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.780 [2024-07-21 01:38:56.931484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:11.780 [2024-07-21 01:38:56.931494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:29:11.780 [2024-07-21 01:38:56.931504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 
01:38:56.940596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.940620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:11.780 [2024-07-21 01:38:56.940631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.940641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.940688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.940699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:11.780 [2024-07-21 01:38:56.940709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.940718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.940776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.940793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:11.780 [2024-07-21 01:38:56.940803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.940812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.940840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.940852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:11.780 [2024-07-21 01:38:56.940862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.940871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.957830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.957866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:11.780 [2024-07-21 01:38:56.957878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.957888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.972492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.972524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:11.780 [2024-07-21 01:38:56.972536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.972546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.972596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.972607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:11.780 [2024-07-21 01:38:56.972624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.972634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.972673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.972684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:11.780 [2024-07-21 01:38:56.972695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.972713] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.972784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.972798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:11.780 [2024-07-21 01:38:56.972808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.972822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.972869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.972881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:11.780 [2024-07-21 01:38:56.972892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.972902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.972944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.972976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:11.780 [2024-07-21 01:38:56.972986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.780 [2024-07-21 01:38:56.973007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.780 [2024-07-21 01:38:56.973059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.780 [2024-07-21 01:38:56.973082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:11.780 [2024-07-21 01:38:56.973092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.781 [2024-07-21 01:38:56.973103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.781 [2024-07-21 01:38:56.973245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 60.903 ms, result 0 00:29:12.717 00:29:12.717 00:29:12.717 01:38:57 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:12.717 [2024-07-21 01:38:57.948653] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:29:12.717 [2024-07-21 01:38:57.948796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97011 ] 00:29:12.975 [2024-07-21 01:38:58.117543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.975 [2024-07-21 01:38:58.195451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.235 [2024-07-21 01:38:58.342977] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:13.235 [2024-07-21 01:38:58.343053] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:13.235 [2024-07-21 01:38:58.497710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.497764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:13.235 [2024-07-21 01:38:58.497788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:13.235 [2024-07-21 01:38:58.497798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.497875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.497888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.235 [2024-07-21 01:38:58.497901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:13.235 [2024-07-21 01:38:58.497918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.497939] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:13.235 [2024-07-21 01:38:58.498219] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:13.235 [2024-07-21 01:38:58.498246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.498260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.235 [2024-07-21 01:38:58.498271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:29:13.235 [2024-07-21 01:38:58.498280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.498632] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:13.235 [2024-07-21 01:38:58.498664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.498675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:13.235 [2024-07-21 01:38:58.498693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:13.235 [2024-07-21 01:38:58.498708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.498764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.498774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:13.235 [2024-07-21 01:38:58.498784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:13.235 [2024-07-21 01:38:58.498794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.499169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 
01:38:58.499190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.235 [2024-07-21 01:38:58.499201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:29:13.235 [2024-07-21 01:38:58.499214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.499302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.499314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.235 [2024-07-21 01:38:58.499324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:29:13.235 [2024-07-21 01:38:58.499337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.499364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.499375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:13.235 [2024-07-21 01:38:58.499394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:13.235 [2024-07-21 01:38:58.499414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.499437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:13.235 [2024-07-21 01:38:58.502506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.502643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:13.235 [2024-07-21 01:38:58.502795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:29:13.235 [2024-07-21 01:38:58.502853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.502917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.503024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:13.235 [2024-07-21 01:38:58.503114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:13.235 [2024-07-21 01:38:58.503145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.503213] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:13.235 [2024-07-21 01:38:58.503264] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:13.235 [2024-07-21 01:38:58.503438] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:13.235 [2024-07-21 01:38:58.503500] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:13.235 [2024-07-21 01:38:58.503693] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:13.235 [2024-07-21 01:38:58.503748] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:13.235 [2024-07-21 01:38:58.503871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:13.235 [2024-07-21 01:38:58.503967] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:13.235 [2024-07-21 01:38:58.504019] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:13.235 [2024-07-21 01:38:58.504117] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:13.235 [2024-07-21 01:38:58.504151] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:13.235 [2024-07-21 01:38:58.504183] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:13.235 [2024-07-21 01:38:58.504241] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:13.235 [2024-07-21 01:38:58.504286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.504316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:13.235 [2024-07-21 01:38:58.504349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:29:13.235 [2024-07-21 01:38:58.504457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.504584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.235 [2024-07-21 01:38:58.504624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:13.235 [2024-07-21 01:38:58.504655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:13.235 [2024-07-21 01:38:58.504684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.235 [2024-07-21 01:38:58.504872] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:13.235 [2024-07-21 01:38:58.504918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:13.235 [2024-07-21 01:38:58.504950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:13.235 [2024-07-21 01:38:58.504981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.235 [2024-07-21 01:38:58.505061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:13.235 [2024-07-21 01:38:58.505076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:13.235 [2024-07-21 01:38:58.505086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:13.235 [2024-07-21 01:38:58.505096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:13.235 [2024-07-21 01:38:58.505106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:13.235 [2024-07-21 01:38:58.505116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:13.235 [2024-07-21 01:38:58.505132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:13.236 [2024-07-21 01:38:58.505142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:13.236 [2024-07-21 01:38:58.505151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:13.236 [2024-07-21 01:38:58.505161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:13.236 [2024-07-21 01:38:58.505181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:13.236 [2024-07-21 01:38:58.505191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:13.236 [2024-07-21 01:38:58.505211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:13.236 [2024-07-21 01:38:58.505221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:13.236 [2024-07-21 01:38:58.505241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.236 [2024-07-21 01:38:58.505260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:13.236 [2024-07-21 01:38:58.505270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.236 [2024-07-21 01:38:58.505290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:13.236 [2024-07-21 01:38:58.505304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.236 [2024-07-21 01:38:58.505323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:13.236 [2024-07-21 01:38:58.505334] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:13.236 [2024-07-21 01:38:58.505354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:13.236 [2024-07-21 01:38:58.505364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:13.236 [2024-07-21 01:38:58.505382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:13.236 [2024-07-21 01:38:58.505392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:13.236 [2024-07-21 01:38:58.505401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:13.236 [2024-07-21 01:38:58.505410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:13.236 [2024-07-21 01:38:58.505420] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:13.236 [2024-07-21 01:38:58.505429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:13.236 [2024-07-21 01:38:58.505449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:13.236 [2024-07-21 01:38:58.505462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505472] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:13.236 [2024-07-21 01:38:58.505483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:13.236 [2024-07-21 01:38:58.505497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:13.236 [2024-07-21 01:38:58.505514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:13.236 [2024-07-21 01:38:58.505532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:13.236 [2024-07-21 01:38:58.505542] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:13.236 [2024-07-21 01:38:58.505552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:13.236 [2024-07-21 01:38:58.505563] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:13.236 [2024-07-21 01:38:58.505573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:13.236 [2024-07-21 01:38:58.505583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:13.236 [2024-07-21 01:38:58.505594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:13.236 [2024-07-21 01:38:58.505614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.236 [2024-07-21 01:38:58.505635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:13.236 [2024-07-21 01:38:58.505647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:13.236 [2024-07-21 01:38:58.505659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:13.236 [2024-07-21 01:38:58.505674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:13.236 [2024-07-21 01:38:58.505686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:13.236 [2024-07-21 01:38:58.505697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:13.236 [2024-07-21 01:38:58.505708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:13.236 [2024-07-21 01:38:58.505720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:13.236 [2024-07-21 01:38:58.505731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:13.236 [2024-07-21 01:38:58.505761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:13.236 [2024-07-21 01:38:58.505773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:13.236 [2024-07-21 01:38:58.505784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:13.236 [2024-07-21 01:38:58.505800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:13.236 [2024-07-21 01:38:58.505811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:13.236 [2024-07-21 01:38:58.505823] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:13.236 [2024-07-21 01:38:58.505854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.236 [2024-07-21 01:38:58.505866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:29:13.236 [2024-07-21 01:38:58.505878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:13.236 [2024-07-21 01:38:58.505889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:13.236 [2024-07-21 01:38:58.505903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:13.236 [2024-07-21 01:38:58.505916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.236 [2024-07-21 01:38:58.505927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:13.236 [2024-07-21 01:38:58.505938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:29:13.236 [2024-07-21 01:38:58.505949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.236 [2024-07-21 01:38:58.537522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.236 [2024-07-21 01:38:58.537629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:13.236 [2024-07-21 01:38:58.537675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.565 ms 00:29:13.236 [2024-07-21 01:38:58.537710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.236 [2024-07-21 01:38:58.537989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.236 [2024-07-21 01:38:58.538030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:13.236 [2024-07-21 01:38:58.538065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:29:13.236 [2024-07-21 01:38:58.538125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.559904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.559946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.496 [2024-07-21 01:38:58.559965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.655 ms 00:29:13.496 [2024-07-21 01:38:58.559980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.560025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.560041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:13.496 [2024-07-21 01:38:58.560057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:13.496 [2024-07-21 01:38:58.560072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.560219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.560237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:13.496 [2024-07-21 01:38:58.560253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:13.496 [2024-07-21 01:38:58.560268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.560432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.560451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:13.496 [2024-07-21 01:38:58.560465] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:29:13.496 [2024-07-21 01:38:58.560479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.571301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.571336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:13.496 [2024-07-21 01:38:58.571349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.811 ms 00:29:13.496 [2024-07-21 01:38:58.571360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.571500] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:13.496 [2024-07-21 01:38:58.571527] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:13.496 [2024-07-21 01:38:58.571540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.571551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:13.496 [2024-07-21 01:38:58.571565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:29:13.496 [2024-07-21 01:38:58.571576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.581718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.581750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:13.496 [2024-07-21 01:38:58.581763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.140 ms 00:29:13.496 [2024-07-21 01:38:58.581773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.581912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.581924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:13.496 [2024-07-21 01:38:58.581948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:29:13.496 [2024-07-21 01:38:58.581991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.582049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.582064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:13.496 [2024-07-21 01:38:58.582075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:13.496 [2024-07-21 01:38:58.582085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.582332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.582345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:13.496 [2024-07-21 01:38:58.582356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:29:13.496 [2024-07-21 01:38:58.582373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.582391] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:13.496 [2024-07-21 01:38:58.582408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.582419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:29:13.496 [2024-07-21 01:38:58.582429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:13.496 [2024-07-21 01:38:58.582446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.590728] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:13.496 [2024-07-21 01:38:58.590930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.590945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:13.496 [2024-07-21 01:38:58.590957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.477 ms 00:29:13.496 [2024-07-21 01:38:58.590971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.593142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.593170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:13.496 [2024-07-21 01:38:58.593181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:29:13.496 [2024-07-21 01:38:58.593191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.593260] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:29:13.496 [2024-07-21 01:38:58.593856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.593870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:13.496 [2024-07-21 01:38:58.593886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:29:13.496 [2024-07-21 01:38:58.593896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.593938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.593951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:13.496 [2024-07-21 01:38:58.593962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:13.496 [2024-07-21 01:38:58.593971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.594007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:13.496 [2024-07-21 01:38:58.594019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.594029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:13.496 [2024-07-21 01:38:58.594046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:13.496 [2024-07-21 01:38:58.594059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.599371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.599404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:13.496 [2024-07-21 01:38:58.599417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.302 ms 00:29:13.496 [2024-07-21 01:38:58.599427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.599489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.496 [2024-07-21 01:38:58.599500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:29:13.496 [2024-07-21 01:38:58.599520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:13.496 [2024-07-21 01:38:58.599535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.496 [2024-07-21 01:38:58.604880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.124 ms, result 0 00:29:54.000  Copying: 26/1024 [MB] (26 MBps) Copying: 50/1024 [MB] (24 MBps) Copying: 76/1024 [MB] (25 MBps) Copying: 101/1024 [MB] (25 MBps) Copying: 126/1024 [MB] (25 MBps) Copying: 151/1024 [MB] (24 MBps) Copying: 176/1024 [MB] (24 MBps) Copying: 200/1024 [MB] (24 MBps) Copying: 225/1024 [MB] (24 MBps) Copying: 250/1024 [MB] (24 MBps) Copying: 275/1024 [MB] (25 MBps) Copying: 301/1024 [MB] (25 MBps) Copying: 324/1024 [MB] (23 MBps) Copying: 348/1024 [MB] (23 MBps) Copying: 372/1024 [MB] (24 MBps) Copying: 399/1024 [MB] (27 MBps) Copying: 426/1024 [MB] (26 MBps) Copying: 451/1024 [MB] (25 MBps) Copying: 476/1024 [MB] (25 MBps) Copying: 501/1024 [MB] (25 MBps) Copying: 527/1024 [MB] (25 MBps) Copying: 553/1024 [MB] (26 MBps) Copying: 580/1024 [MB] (26 MBps) Copying: 606/1024 [MB] (26 MBps) Copying: 633/1024 [MB] (26 MBps) Copying: 659/1024 [MB] (26 MBps) Copying: 685/1024 [MB] (26 MBps) Copying: 711/1024 [MB] (26 MBps) Copying: 737/1024 [MB] (25 MBps) Copying: 763/1024 [MB] (26 MBps) Copying: 789/1024 [MB] (26 MBps) Copying: 816/1024 [MB] (26 MBps) Copying: 841/1024 [MB] (25 MBps) Copying: 866/1024 [MB] (25 MBps) Copying: 892/1024 [MB] (25 MBps) Copying: 917/1024 [MB] (25 MBps) Copying: 942/1024 [MB] (25 MBps) Copying: 968/1024 [MB] (25 MBps) Copying: 993/1024 [MB] (25 MBps) Copying: 1018/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-21 01:39:39.066926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.000 [2024-07-21 01:39:39.067042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:54.000 [2024-07-21 01:39:39.067077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:54.000 [2024-07-21 01:39:39.067100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.000 [2024-07-21 01:39:39.067145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:54.000 [2024-07-21 01:39:39.069428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.000 [2024-07-21 01:39:39.070014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:54.000 [2024-07-21 01:39:39.070049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:29:54.000 [2024-07-21 01:39:39.070081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.000 [2024-07-21 01:39:39.070501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.000 [2024-07-21 01:39:39.070537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:54.000 [2024-07-21 01:39:39.070561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:29:54.000 [2024-07-21 01:39:39.070583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.000 [2024-07-21 01:39:39.070639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.000 [2024-07-21 01:39:39.070663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:54.000 
[2024-07-21 01:39:39.070702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:54.000 [2024-07-21 01:39:39.070723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.000 [2024-07-21 01:39:39.070847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.000 [2024-07-21 01:39:39.070872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:54.000 [2024-07-21 01:39:39.070894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:54.000 [2024-07-21 01:39:39.070915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.000 [2024-07-21 01:39:39.070948] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:54.000 [2024-07-21 01:39:39.070976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:29:54.000 [2024-07-21 01:39:39.071002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071411] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.071989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 
01:39:39.072012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:54.000 [2024-07-21 01:39:39.072282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:29:54.001 [2024-07-21 01:39:39.072569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.072988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:54.001 [2024-07-21 01:39:39.073317] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:54.001 [2024-07-21 01:39:39.073337] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 22795033-ca25-42d0-8831-7d3fb7d5cdbf 00:29:54.001 [2024-07-21 01:39:39.073360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:29:54.001 [2024-07-21 01:39:39.073401] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4640 00:29:54.001 [2024-07-21 01:39:39.073422] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4608 00:29:54.001 [2024-07-21 01:39:39.073451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0069 00:29:54.001 [2024-07-21 01:39:39.073472] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:54.001 [2024-07-21 01:39:39.073493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:54.001 [2024-07-21 01:39:39.073515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:54.001 [2024-07-21 01:39:39.073535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:54.001 [2024-07-21 01:39:39.073555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:54.001 [2024-07-21 01:39:39.073575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.001 [2024-07-21 01:39:39.073596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:54.001 [2024-07-21 01:39:39.073618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:29:54.001 [2024-07-21 01:39:39.073638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.076872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.001 [2024-07-21 01:39:39.076911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:54.001 [2024-07-21 01:39:39.076927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:29:54.001 [2024-07-21 01:39:39.076941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.077136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.001 [2024-07-21 01:39:39.077154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:54.001 [2024-07-21 01:39:39.077170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:29:54.001 [2024-07-21 01:39:39.077183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.087425] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.087457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:54.001 [2024-07-21 01:39:39.087479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.087490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.087548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.087559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:54.001 [2024-07-21 01:39:39.087570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.087581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.087652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.087666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:54.001 [2024-07-21 01:39:39.087676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.087693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.087711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.087722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:54.001 [2024-07-21 01:39:39.087734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.087743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.109024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.109070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:54.001 [2024-07-21 01:39:39.109084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.109094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.123947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.123981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.001 [2024-07-21 01:39:39.123994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.124005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.124062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.124075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.001 [2024-07-21 01:39:39.124092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.124102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.124145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.124156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.001 [2024-07-21 01:39:39.124168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.124179] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.124245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.124259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.001 [2024-07-21 01:39:39.124274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.124284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.124341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.124355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:54.001 [2024-07-21 01:39:39.124365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.001 [2024-07-21 01:39:39.124376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.001 [2024-07-21 01:39:39.124429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.001 [2024-07-21 01:39:39.124443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:54.001 [2024-07-21 01:39:39.124453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.002 [2024-07-21 01:39:39.124467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.002 [2024-07-21 01:39:39.124516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.002 [2024-07-21 01:39:39.124539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.002 [2024-07-21 01:39:39.124549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.002 [2024-07-21 01:39:39.124560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.002 [2024-07-21 01:39:39.124714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 57.848 ms, result 0 00:29:54.261 00:29:54.261 00:29:54.261 01:39:39 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:56.165 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:56.165 Process with pid 95434 is not found 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 95434 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 95434 ']' 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 95434 00:29:56.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (95434) - No such process 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 95434 is not found' 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:29:56.165 Remove shared memory files 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # 
echo Remove shared memory files 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_band_md /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_l2p_l1 /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_l2p_l2 /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_l2p_l2_ctx /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_nvc_md /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_p2l_pool /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_sb /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_sb_shm /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_trim_bitmap /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_trim_log /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_trim_md /dev/hugepages/ftl_22795033-ca25-42d0-8831-7d3fb7d5cdbf_vmap 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:29:56.165 ************************************ 00:29:56.165 END TEST ftl_restore_fast 00:29:56.165 ************************************ 00:29:56.165 00:29:56.165 real 3m16.630s 00:29:56.165 user 3m4.283s 00:29:56.165 sys 0m13.373s 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:56.165 01:39:41 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 01:39:41 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:56.165 01:39:41 ftl -- ftl/ftl.sh@14 -- # killprocess 88104 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@946 -- # '[' -z 88104 ']' 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@950 -- # kill -0 88104 00:29:56.165 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (88104) - No such process 00:29:56.165 Process with pid 88104 is not found 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 88104 is not found' 00:29:56.165 01:39:41 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:29:56.165 01:39:41 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97466 00:29:56.165 01:39:41 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:56.165 01:39:41 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97466 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@827 -- # '[' -z 97466 ']' 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:56.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:56.165 01:39:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:56.425 [2024-07-21 01:39:41.551018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:29:56.425 [2024-07-21 01:39:41.551159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97466 ] 00:29:56.425 [2024-07-21 01:39:41.714288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.684 [2024-07-21 01:39:41.787652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.252 01:39:42 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:57.252 01:39:42 ftl -- common/autotest_common.sh@860 -- # return 0 00:29:57.252 01:39:42 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:57.511 nvme0n1 00:29:57.511 01:39:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:29:57.511 01:39:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:57.511 01:39:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:57.512 01:39:42 ftl -- ftl/common.sh@28 -- # stores=97845aa3-b171-4bfd-bc30-abec04ad50b2 00:29:57.512 01:39:42 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:29:57.512 01:39:42 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97845aa3-b171-4bfd-bc30-abec04ad50b2 00:29:57.771 01:39:42 ftl -- ftl/ftl.sh@23 -- # killprocess 97466 00:29:57.771 01:39:42 ftl -- common/autotest_common.sh@946 -- # '[' -z 97466 ']' 00:29:57.771 01:39:42 ftl -- common/autotest_common.sh@950 -- # kill -0 97466 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@951 -- # uname 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97466 00:29:57.771 killing process with pid 97466 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97466' 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@965 -- # kill 97466 00:29:57.771 01:39:43 ftl -- common/autotest_common.sh@970 -- # wait 97466 00:29:58.339 01:39:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:58.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:58.906 Waiting for block devices as requested 00:29:58.906 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:58.906 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:59.165 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:59.165 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:04.445 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:04.445 01:39:49 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:04.445 01:39:49 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:04.445 Remove shared memory files 00:30:04.445 01:39:49 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:04.445 01:39:49 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:04.445 01:39:49 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:04.445 01:39:49 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:04.445 01:39:49 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:04.445 ************************************ 00:30:04.445 
END TEST ftl 00:30:04.445 ************************************ 00:30:04.445 00:30:04.445 real 13m49.913s 00:30:04.445 user 15m33.347s 00:30:04.445 sys 1m44.804s 00:30:04.445 01:39:49 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:04.445 01:39:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:04.445 01:39:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:04.445 01:39:49 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:04.445 01:39:49 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:04.445 01:39:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:04.445 01:39:49 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:04.445 01:39:49 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:04.445 01:39:49 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:04.445 01:39:49 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:04.445 01:39:49 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:04.445 01:39:49 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:04.445 01:39:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:04.445 01:39:49 -- common/autotest_common.sh@10 -- # set +x 00:30:04.445 01:39:49 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:04.445 01:39:49 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:30:04.445 01:39:49 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:30:04.445 01:39:49 -- common/autotest_common.sh@10 -- # set +x 00:30:06.357 INFO: APP EXITING 00:30:06.357 INFO: killing all VMs 00:30:06.357 INFO: killing vhost app 00:30:06.357 INFO: EXIT DONE 00:30:06.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:07.231 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:07.231 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:07.231 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:07.231 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:07.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:08.065 Cleaning 00:30:08.065 Removing: /var/run/dpdk/spdk0/config 00:30:08.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:08.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:08.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:08.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:08.065 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:08.065 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:08.065 Removing: /var/run/dpdk/spdk0 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74068 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74234 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74428 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74521 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74550 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74661 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74681 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74845 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74918 00:30:08.065 Removing: /var/run/dpdk/spdk_pid74990 00:30:08.065 Removing: /var/run/dpdk/spdk_pid75082 00:30:08.065 Removing: /var/run/dpdk/spdk_pid75160 00:30:08.065 Removing: /var/run/dpdk/spdk_pid75199 00:30:08.065 Removing: /var/run/dpdk/spdk_pid75236 00:30:08.324 Removing: /var/run/dpdk/spdk_pid75297 00:30:08.324 Removing: /var/run/dpdk/spdk_pid75413 00:30:08.324 Removing: /var/run/dpdk/spdk_pid75831 00:30:08.324 Removing: /var/run/dpdk/spdk_pid75884 00:30:08.324 Removing: /var/run/dpdk/spdk_pid75936 
00:30:08.324 Removing: /var/run/dpdk/spdk_pid75951 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76027 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76037 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76112 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76128 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76181 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76199 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76241 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76259 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76389 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76420 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76501 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76560 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76582 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76647 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76688 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76724 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76759 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76800 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76836 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76871 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76907 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76948 00:30:08.324 Removing: /var/run/dpdk/spdk_pid76983 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77019 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77060 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77090 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77131 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77171 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77202 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77243 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77282 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77320 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77360 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77397 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77466 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77563 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77708 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77781 00:30:08.324 Removing: /var/run/dpdk/spdk_pid77812 00:30:08.324 Removing: /var/run/dpdk/spdk_pid78233 00:30:08.324 Removing: /var/run/dpdk/spdk_pid78319 00:30:08.324 Removing: /var/run/dpdk/spdk_pid78413 00:30:08.324 Removing: /var/run/dpdk/spdk_pid78455 00:30:08.324 Removing: /var/run/dpdk/spdk_pid78486 00:30:08.324 Removing: /var/run/dpdk/spdk_pid78556 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79175 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79206 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79661 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79752 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79858 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79900 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79930 00:30:08.324 Removing: /var/run/dpdk/spdk_pid79951 00:30:08.324 Removing: /var/run/dpdk/spdk_pid81794 00:30:08.583 Removing: /var/run/dpdk/spdk_pid81915 00:30:08.583 Removing: /var/run/dpdk/spdk_pid81923 00:30:08.583 Removing: /var/run/dpdk/spdk_pid81936 00:30:08.583 Removing: /var/run/dpdk/spdk_pid81985 00:30:08.583 Removing: /var/run/dpdk/spdk_pid81989 00:30:08.583 Removing: /var/run/dpdk/spdk_pid82001 00:30:08.583 Removing: /var/run/dpdk/spdk_pid82046 00:30:08.583 Removing: /var/run/dpdk/spdk_pid82050 00:30:08.583 Removing: /var/run/dpdk/spdk_pid82062 00:30:08.583 Removing: /var/run/dpdk/spdk_pid82107 00:30:08.583 Removing: /var/run/dpdk/spdk_pid82111 00:30:08.583 Removing: /var/run/dpdk/spdk_pid82123 00:30:08.583 Removing: /var/run/dpdk/spdk_pid83492 00:30:08.583 Removing: /var/run/dpdk/spdk_pid83573 00:30:08.583 Removing: 
/var/run/dpdk/spdk_pid84461 00:30:08.583 Removing: /var/run/dpdk/spdk_pid84817 00:30:08.583 Removing: /var/run/dpdk/spdk_pid84882 00:30:08.583 Removing: /var/run/dpdk/spdk_pid84947 00:30:08.583 Removing: /var/run/dpdk/spdk_pid85017 00:30:08.583 Removing: /var/run/dpdk/spdk_pid85100 00:30:08.583 Removing: /var/run/dpdk/spdk_pid85168 00:30:08.583 Removing: /var/run/dpdk/spdk_pid85297 00:30:08.583 Removing: /var/run/dpdk/spdk_pid85568 00:30:08.583 Removing: /var/run/dpdk/spdk_pid85599 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86022 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86201 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86293 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86389 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86432 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86458 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86743 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86781 00:30:08.583 Removing: /var/run/dpdk/spdk_pid86832 00:30:08.583 Removing: /var/run/dpdk/spdk_pid87181 00:30:08.583 Removing: /var/run/dpdk/spdk_pid87319 00:30:08.583 Removing: /var/run/dpdk/spdk_pid88104 00:30:08.583 Removing: /var/run/dpdk/spdk_pid88217 00:30:08.583 Removing: /var/run/dpdk/spdk_pid88371 00:30:08.583 Removing: /var/run/dpdk/spdk_pid88463 00:30:08.583 Removing: /var/run/dpdk/spdk_pid88749 00:30:08.583 Removing: /var/run/dpdk/spdk_pid88991 00:30:08.583 Removing: /var/run/dpdk/spdk_pid89346 00:30:08.583 Removing: /var/run/dpdk/spdk_pid89528 00:30:08.583 Removing: /var/run/dpdk/spdk_pid89667 00:30:08.583 Removing: /var/run/dpdk/spdk_pid89703 00:30:08.583 Removing: /var/run/dpdk/spdk_pid89835 00:30:08.583 Removing: /var/run/dpdk/spdk_pid89854 00:30:08.583 Removing: /var/run/dpdk/spdk_pid89896 00:30:08.583 Removing: /var/run/dpdk/spdk_pid90091 00:30:08.583 Removing: /var/run/dpdk/spdk_pid90300 00:30:08.583 Removing: /var/run/dpdk/spdk_pid90755 00:30:08.583 Removing: /var/run/dpdk/spdk_pid91218 00:30:08.583 Removing: /var/run/dpdk/spdk_pid91691 00:30:08.583 Removing: /var/run/dpdk/spdk_pid92210 00:30:08.583 Removing: /var/run/dpdk/spdk_pid92374 00:30:08.583 Removing: /var/run/dpdk/spdk_pid92455 00:30:08.583 Removing: /var/run/dpdk/spdk_pid93083 00:30:08.583 Removing: /var/run/dpdk/spdk_pid93146 00:30:08.583 Removing: /var/run/dpdk/spdk_pid93633 00:30:08.841 Removing: /var/run/dpdk/spdk_pid94008 00:30:08.841 Removing: /var/run/dpdk/spdk_pid94532 00:30:08.841 Removing: /var/run/dpdk/spdk_pid94654 00:30:08.841 Removing: /var/run/dpdk/spdk_pid94685 00:30:08.841 Removing: /var/run/dpdk/spdk_pid94732 00:30:08.841 Removing: /var/run/dpdk/spdk_pid94782 00:30:08.841 Removing: /var/run/dpdk/spdk_pid94835 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95000 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95073 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95129 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95186 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95221 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95288 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95434 00:30:08.841 Removing: /var/run/dpdk/spdk_pid95638 00:30:08.841 Removing: /var/run/dpdk/spdk_pid96098 00:30:08.841 Removing: /var/run/dpdk/spdk_pid96552 00:30:08.841 Removing: /var/run/dpdk/spdk_pid97011 00:30:08.841 Removing: /var/run/dpdk/spdk_pid97466 00:30:08.841 Clean 00:30:08.841 01:39:54 -- common/autotest_common.sh@1447 -- # return 0 00:30:08.841 01:39:54 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:08.841 01:39:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.841 01:39:54 -- common/autotest_common.sh@10 -- # set +x 
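For reference, the killprocess calls traced above for pids 95434, 88104 and 97466 (the common/autotest_common.sh @946-@973 trace points) all follow the same liveness pattern: kill -0 probes whether the pid still exists without delivering a signal, the "No such process" / "Process with pid N is not found" lines come from that probe failing, and only a live, non-sudo process is actually killed and waited on. A simplified sketch of that flow, not the helper verbatim:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid"; then
            # kill -0 failed: the pid is gone (this is where the
            # "No such process" error in the log comes from)
            echo "Process with pid $pid is not found"
            return 0
        fi
        # Mirror the ps/comm check above so a sudo wrapper is never killed
        if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }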
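Similarly, the clear_lvols step traced earlier in the ftl teardown (ftl/common.sh@28-@30, right after bdev_nvme_attach_controller) is just an RPC round trip: list every lvol store on the target, pull the UUIDs out with jq, and delete each store before the target is shut down. A rough equivalent, assuming the target is listening on the default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Collect the UUIDs of all lvol stores currently registered on the target
    stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done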
00:30:08.841 01:39:54 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:08.841 01:39:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.841 01:39:54 -- common/autotest_common.sh@10 -- # set +x 00:30:09.098 01:39:54 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:09.098 01:39:54 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:09.098 01:39:54 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:09.098 01:39:54 -- spdk/autotest.sh@391 -- # hash lcov 00:30:09.098 01:39:54 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:09.098 01:39:54 -- spdk/autotest.sh@393 -- # hostname 00:30:09.098 01:39:54 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:09.098 geninfo: WARNING: invalid characters removed from testname! 00:30:35.662 01:40:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:36.235 01:40:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:38.767 01:40:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:40.666 01:40:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:42.572 01:40:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:45.158 01:40:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:47.064 01:40:32 -- spdk/autotest.sh@400 -- # rm -f cov_base.info 
cov_test.info OLD_STDOUT OLD_STDERR 00:30:47.064 01:40:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:47.064 01:40:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:47.064 01:40:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.064 01:40:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.064 01:40:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.064 01:40:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.064 01:40:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.064 01:40:32 -- paths/export.sh@5 -- $ export PATH 00:30:47.064 01:40:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.064 01:40:32 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:47.064 01:40:32 -- common/autobuild_common.sh@437 -- $ date +%s 00:30:47.064 01:40:32 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721526032.XXXXXX 00:30:47.064 01:40:32 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721526032.eyCMFb 00:30:47.064 01:40:32 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:30:47.064 01:40:32 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:30:47.064 01:40:32 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:30:47.064 01:40:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:30:47.064 01:40:32 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:47.064 01:40:32 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:47.064 01:40:32 -- common/autobuild_common.sh@453 -- $ get_config_params 00:30:47.064 01:40:32 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:30:47.064 01:40:32 -- common/autotest_common.sh@10 -- $ set +x 00:30:47.064 01:40:32 
-- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:30:47.064 01:40:32 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:30:47.064 01:40:32 -- pm/common@17 -- $ local monitor 00:30:47.064 01:40:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:47.064 01:40:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:47.064 01:40:32 -- pm/common@25 -- $ sleep 1 00:30:47.064 01:40:32 -- pm/common@21 -- $ date +%s 00:30:47.064 01:40:32 -- pm/common@21 -- $ date +%s 00:30:47.064 01:40:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721526032 00:30:47.064 01:40:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721526032 00:30:47.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721526032_collect-cpu-load.pm.log 00:30:47.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721526032_collect-vmstat.pm.log 00:30:48.000 01:40:33 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:30:48.000 01:40:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:48.000 01:40:33 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:48.000 01:40:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:48.000 01:40:33 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:48.000 01:40:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:48.000 01:40:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:48.000 01:40:33 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:48.000 01:40:33 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:48.000 01:40:33 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:48.000 01:40:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:48.000 01:40:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:48.000 01:40:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:48.000 01:40:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:48.000 01:40:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:48.000 01:40:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:48.000 01:40:33 -- pm/common@44 -- $ pid=99188 00:30:48.000 01:40:33 -- pm/common@50 -- $ kill -TERM 99188 00:30:48.000 01:40:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:48.000 01:40:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:48.000 01:40:33 -- pm/common@44 -- $ pid=99189 00:30:48.000 01:40:33 -- pm/common@50 -- $ kill -TERM 99189 00:30:48.000 + [[ -n 5877 ]] 00:30:48.000 + sudo kill 5877 00:30:48.009 [Pipeline] } 00:30:48.027 [Pipeline] // timeout 00:30:48.031 [Pipeline] } 00:30:48.047 [Pipeline] // stage 00:30:48.052 [Pipeline] } 00:30:48.067 [Pipeline] // catchError 00:30:48.075 [Pipeline] stage 00:30:48.077 [Pipeline] { 
(Stop VM) 00:30:48.088 [Pipeline] sh 00:30:48.363 + vagrant halt 00:30:51.644 ==> default: Halting domain... 00:30:58.223 [Pipeline] sh 00:30:58.503 + vagrant destroy -f 00:31:01.035 ==> default: Removing domain... 00:31:01.980 [Pipeline] sh 00:31:02.259 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:31:02.269 [Pipeline] } 00:31:02.291 [Pipeline] // stage 00:31:02.296 [Pipeline] } 00:31:02.317 [Pipeline] // dir 00:31:02.323 [Pipeline] } 00:31:02.342 [Pipeline] // wrap 00:31:02.349 [Pipeline] } 00:31:02.364 [Pipeline] // catchError 00:31:02.373 [Pipeline] stage 00:31:02.375 [Pipeline] { (Epilogue) 00:31:02.390 [Pipeline] sh 00:31:02.669 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:07.991 [Pipeline] catchError 00:31:07.993 [Pipeline] { 00:31:08.005 [Pipeline] sh 00:31:08.288 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:08.547 Artifacts sizes are good 00:31:08.556 [Pipeline] } 00:31:08.573 [Pipeline] // catchError 00:31:08.584 [Pipeline] archiveArtifacts 00:31:08.590 Archiving artifacts 00:31:08.701 [Pipeline] cleanWs 00:31:08.713 [WS-CLEANUP] Deleting project workspace... 00:31:08.713 [WS-CLEANUP] Deferred wipeout is used... 00:31:08.720 [WS-CLEANUP] done 00:31:08.721 [Pipeline] } 00:31:08.738 [Pipeline] // stage 00:31:08.743 [Pipeline] } 00:31:08.758 [Pipeline] // node 00:31:08.763 [Pipeline] End of Pipeline 00:31:08.800 Finished: SUCCESS
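For completeness, the coverage post-processing traced in the autotest epilogue above (spdk/autotest.sh@393-@400) is a standard lcov capture/merge/filter sequence: capture coverage for the test run, merge it with the baseline captured at build time, strip external and uninteresting paths, then drop the intermediates. The sketch below keeps only the essential flags and uses an illustrative $out path; the real invocations also pass the --rc branch/function options shown in the log:

    cd /home/vagrant/spdk_repo/spdk
    out=../output
    # Capture coverage gathered during this test run, tagged with the host name
    lcov --no-external -q -c -d . -t "$(hostname)" -o "$out/cov_test.info"
    # Merge with the baseline tracefile produced before the tests ran
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Filter out DPDK, system headers and tool sources we do not report on
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
    rm -f "$out/cov_base.info" "$out/cov_test.info"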